
Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on select iPhone and iPad models.

About the feature

The “Limit Precise Location” feature becomes available after updating to iOS 26.3 or later. It restricts the information that mobile carriers use to determine a device’s location through cell tower connections. Once enabled, cellular networks can only identify the device’s approximate location, such as its neighbourhood, rather than a precise street address.

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” The company adds, “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” then “Cellular Data Options,” and tapping the “Limit Precise Location” toggle. Enabling the setting may require a device restart to complete activation.

The feature works only on the iPhone Air and on iPad Pro (M5) Wi-Fi + Cellular models running iOS 26.3 or later.

Where will it work?

The availability of this feature depends on carrier support. Compatible mobile networks include:

  • EE and BT in the UK
  • Boost Mobile in the UK
  • Telekom in Germany
  • AIS and True in Thailand

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Although Apple's new privacy feature is currently supported by only a small number of networks, it is a significant step toward limiting the data carriers can collect about their customers' movements and habits, since cellular networks can easily track device locations via tower connections as part of normal network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,” Apple explains.

Exposed Admin Dashboard in AI Toy Put Children’s Data and Conversations at Risk

 

A routine investigation by a security researcher into an AI-powered toy revealed a serious security lapse that could have exposed sensitive information belonging to children and their families.

The issue came to light when security researcher Joseph Thacker examined an AI toy owned by a neighbor. In a blog post, Thacker described how he and fellow researcher Joel Margolis uncovered an unsecured admin interface linked to the Bondu AI toy.

Margolis identified a suspicious domain—console.bondu.com—referenced in the Content Security Policy headers of the toy’s mobile app backend. On visiting the domain, he found a simple option labeled “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. Instead, logging in granted access to Bondu’s core administrative dashboard.

“We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

Further analysis of the dashboard showed that the researchers had unrestricted visibility into “Every conversation transcript that any child has had with the toy,” spanning “tens of thousands of sessions.” The exposed panel also included extensive personal details about children and their households, such as:
  • Child’s name and date of birth
  • Names of family members
  • Preferences, likes, and dislikes
  • Parent-defined developmental objectives
  • The custom name assigned to the toy
  • Historical conversations used to provide context to the language model
  • Device-level data including IP-based location, battery status, and activity state
  • Controls to reboot devices and push firmware updates
The researchers also observed that the system relies on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

Beyond the authentication flaw, the team identified an Insecure Direct Object Reference (IDOR) vulnerability in the API. This weakness “allowed us to retrieve any child’s profile data by simply guessing their ID.”
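To illustrate the class of bug being described, here is a minimal TypeScript sketch of an IDOR. The names and data are hypothetical and are not Bondu's actual API, but the pattern is the same: the vulnerable lookup trusts whatever ID the caller supplies and never checks ownership, so sequential IDs can simply be enumerated.

    // Minimal sketch of an IDOR (hypothetical data and function names, not Bondu's real API).
    type Profile = { id: number; ownerAccount: string; childName: string };

    const childProfiles = new Map<number, Profile>([
      [101, { id: 101, ownerAccount: "parent-a@example.com", childName: "Alice" }],
      [102, { id: 102, ownerAccount: "parent-b@example.com", childName: "Bob" }],
    ]);

    // Vulnerable: the record is looked up by the requested ID alone, so any authenticated
    // caller can enumerate IDs 1, 2, 3, ... and read every profile.
    function getProfileVulnerable(requestedId: number): Profile | undefined {
      return childProfiles.get(requestedId);
    }

    // Fixed: the record is returned only if it belongs to the authenticated caller.
    function getProfileChecked(requestedId: number, caller: string): Profile | undefined {
      const profile = childProfiles.get(requestedId);
      return profile && profile.ownerAccount === caller ? profile : undefined;
    }

    console.log(getProfileVulnerable(102));                      // leaks Bob's profile to anyone
    console.log(getProfileChecked(102, "parent-a@example.com")); // undefined: ownership enforced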

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”

Bondu Responds Within Minutes

Margolis contacted Bondu’s CEO via LinkedIn over the weekend, prompting the company to disable access to the exposed console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

Bondu also initiated a broader security review, searched for additional vulnerabilities, and launched a bug bounty program. After reviewing console access logs, the company stated that no unauthorized parties had accessed the system aside from the researchers, preventing what could have become a data breach.

Despite the swift and responsible response, the incident changed Thacker’s perspective on AI-driven toys.

“To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.”

He further noted that, beyond data security concerns, AI introduces new risks at home. “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu’s website maintains that the toy was designed with safety as a priority, stating that its “safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period.”

Google’s Project Genie Signals a Major Shift for the Gaming Industry

 

Google has sent a strong signal to the video game sector with the launch of Project Genie, an experimental AI world-model that can create explorable 3D environments using simple text or image prompts.

Although Google’s Genie AI has been known since 2024, its integration into Project Genie marks a significant step forward. The prototype is now accessible to Google AI Ultra subscribers in the US and represents one of Google’s most ambitious AI experiments to date.

Project Genie is being introduced through Google Labs, allowing users to generate short, interactive environments that can be explored in real time. Built on DeepMind’s Genie 3 world-model research, the system lets users move through AI-generated spaces, tweak prompts, and instantly regenerate variations. However, it is not positioned as a full-scale game engine or production-ready development tool.

Demonstrations on the Project Genie website showcase a variety of scenarios, including a cat roaming a living room from atop a Roomba, a vehicle traversing the surface of a rocky moon, and a wingsuit flyer gliding down a mountain. These environments remain navigable in real time, and while the worlds are generated dynamically as characters move, consistency is maintained. Revisiting areas does not create new terrain, and any changes made by an agent persist as long as the system retains sufficient memory.

"Genie 3 environments are … 'auto-regressive' – created frame by frame based on the world description and user actions," Google explains on Genie's website. "The environments remain largely consistent for several minutes, with memory recalling changes from specific interactions for up to a minute."

Despite these capabilities, time constraints remain a challenge.

"The model can support a few minutes of continuous interaction, rather than extended hours," Google said, adding elsewhere that content generation is currently capped at 60 seconds. A Google spokesperson told The Register that Genie can render environments beyond that limit, but the company "found 60 seconds provides a high quality and consistent world, and it gives people enough time to explore and experience the environment."

Google stated that world consistency lasts throughout an entire session, though it remains unclear whether session durations will be expanded in the future. Beyond time limits, the system has other restrictions.

Agents in Genie’s environments are currently limited in the actions they can perform, and interactions between multiple agents are unreliable. The model struggles with readable text, lacks accurate real-world simulation, and can suffer from lag or delayed responses. Google also acknowledged that some previously announced features are missing.

In addition, "A few of the Genie 3 model capabilities we announced in August, such as promptable events that change the world as you explore it, are not yet included in this prototype," Google added.

"A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them," the company said of Genie. "While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world."

Game Developers Face an Uncertain Future

Beyond AGI research, Google also sees potential applications for Genie within the gaming industry—an area already under strain. While Google emphasized that Genie "is not a game engine and can’t create a full game experience," a spokesperson told The Register, "we are excited to see the potential to augment the creative process, enhancing ideation, and speeding up prototyping."

Industry data suggests this innovation arrives at a difficult time. A recent Informa Game Developers Conference report found that 33 percent of US game developers and 28 percent globally experienced at least one layoff over the past two years. Half of respondents said their employer had conducted layoffs within the last year.

Concerns about AI’s role are growing. According to the same survey, 52 percent of industry professionals believe AI is negatively affecting the games sector—up sharply from 30 percent last year and 18 percent the year before. The most critical views came from professionals working in visual and technical art, narrative design, programming, and game design.

One machine learning operations employee summed up those fears bluntly.

"We are intentionally working on a platform that will put all game devs out of work and allow kids to prompt and direct their own content," the GDC study quotes the respondent as saying.

While Project Genie still has clear technical limitations, the rapid pace of AI development suggests those gaps may not last long—raising difficult questions about the future of game development.

Google Introduces AI-Powered Side Panel in Chrome to Automate Browsing




Google has updated its Chrome browser by adding a built-in artificial intelligence panel powered by its Gemini model, marking a step toward automated web interaction. The change reflects the company’s broader push to integrate AI directly into everyday browsing activities.

Chrome, which currently holds more than 70 percent of the global browser market, is now moving in the same direction as other browsers that have already experimented with AI-driven navigation. The idea behind this shift is to allow users to rely on AI systems to explore websites, gather information, and perform online actions with minimal manual input.

The Gemini feature appears as a sidebar within Chrome, reducing the visible area of websites to make room for an interactive chat interface. Through this panel, users can communicate with the AI while keeping their main work open in a separate tab, allowing multitasking without constant tab switching.

Google explains that this setup can help users organize information more effectively. For example, Gemini can compare details across multiple open tabs or summarize reviews from different websites, helping users make decisions more quickly.

For subscribers to Google’s higher-tier AI plans, Chrome now offers an automated browsing capability. This allows Gemini to act as a software agent that can follow instructions involving multiple steps. In demonstrations shared by Google, the AI can analyze images on a webpage, visit external shopping platforms, identify related products, and add items to a cart while staying within a user-defined budget. The final purchase, however, still requires user approval.

The browser update also includes image-focused AI tools that allow users to create or edit images directly within Chrome, further expanding the browser’s role beyond simple web access.

Chrome’s integration with other applications has also been expanded. With user consent, Gemini can now interact with productivity tools, communication apps, media services, navigation platforms, and shopping-related Google services. This gives the AI broader context when assisting with tasks.

Google has indicated that future updates will allow Gemini to remember previous interactions across websites and apps, provided users choose to enable this feature. The goal is to make AI assistance more personalized over time.

Despite these developments, automated browsing faces resistance from some websites. Certain platforms have already taken legal or contractual steps to limit AI-driven activity, particularly for shopping and transactions. This underlines the ongoing tension between automation and website control.

To address these concerns, Google says Chrome will request human confirmation before completing sensitive actions such as purchases or social media posts. The browser will also support an open standard designed to allow AI-driven commerce in collaboration with participating retailers.

Currently, these features are available on Chrome for desktop systems in the United States, with automated browsing restricted to paid subscribers. How widely such AI-assisted browsing will be accepted across the web remains uncertain.


Some ChatGPT Browser Extensions Are Putting User Accounts at Risk

 


Cybersecurity researchers are cautioning users against installing certain browser extensions that claim to improve ChatGPT functionality, warning that some of these tools are being used to steal sensitive data and gain unauthorized access to user accounts.

These extensions, primarily found on the Chrome Web Store, present themselves as productivity boosters designed to help users work faster with AI tools. However, recent analysis suggests that a group of these extensions was intentionally created to exploit users rather than assist them.

Researchers identified at least 16 extensions that appear to be connected to a single coordinated operation. Although listed under different names, the extensions share nearly identical technical foundations, visual designs, publishing timelines, and backend infrastructure. This consistency indicates a deliberate campaign rather than isolated security oversights.

As AI-powered browser tools become more common, attackers are increasingly leveraging their popularity. Many malicious extensions imitate legitimate services by using professional branding and familiar descriptions to appear trustworthy. Because these tools are designed to interact deeply with web-based AI platforms, they often request extensive permissions, which greatly increases the potential impact of abuse.

Unlike conventional malware, these extensions do not install harmful software on a user’s device. Instead, they take advantage of how browser-based authentication works. To operate as advertised, the extensions require access to active ChatGPT sessions and advanced browser privileges. Once installed, they inject hidden scripts into the ChatGPT website that quietly monitor network activity.

When a logged-in user interacts with ChatGPT, the platform sends background requests that include session tokens. These tokens serve as temporary proof that a user is authenticated. The malicious extensions intercept these requests, extract the tokens, and transmit them to external servers controlled by the attackers.
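How can an extension read tokens out of requests the page itself makes? In broad strokes, an injected content script can wrap the page's fetch function so every request passes through it first. The sketch below is illustrative only; it is not code recovered from the extensions, and the Authorization header is an assumption about where a session token might travel.

    // Illustrative only: how an injected script can observe session tokens in page requests.
    const originalFetch = window.fetch.bind(window);

    window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
      const headers = new Headers(
        init?.headers ?? (input instanceof Request ? input.headers : undefined),
      );
      const token = headers.get("Authorization");
      if (token) {
        // A malicious extension would forward this value to an attacker-controlled server;
        // holding a valid session token lets it impersonate the user without password or MFA.
        console.warn("Session token visible to injected script:", token.slice(0, 12) + "...");
      }
      return originalFetch(input, init); // the page keeps working, so the interception goes unnoticed
    };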

Possession of a valid session token allows attackers to impersonate users without needing passwords or multi-factor authentication. This can grant access to private chat histories and any external services connected to the account, potentially exposing sensitive personal or organizational information. Some extensions were also found to collect additional data, including usage patterns and internal access credentials generated by the extension itself.

Investigators also observed synchronized publishing behavior, shared update schedules, and common server infrastructure across the extensions, reinforcing concerns that they are part of a single, organized effort.

While the total number of installations remains relatively low, estimated at fewer than 1,000 downloads, security experts warn that early-stage campaigns can scale rapidly. As AI-related extensions continue to grow in popularity, similar threats are likely to emerge.

Experts advise users to carefully evaluate browser extensions before installation, pay close attention to permission requests, and remove tools that request broad access without clear justification. Staying cautious is increasingly important as browser-based attacks become more subtle and harder to detect.

Android Malware Uses Artificial Intelligence to Secretly Generate Ad Clicks

 


Security researchers have identified a new category of Android malware that uses artificial intelligence to carry out advertising fraud without the user’s knowledge. The malicious software belongs to a recently observed group of click-fraud trojans that rely on machine learning rather than traditional scripted techniques.

Instead of using hard-coded JavaScript instructions to interact with web pages, this malware analyzes advertisements visually. By examining what appears on the screen, it can decide where to tap, closely imitating normal user behavior. This approach allows the malware to function even when ads frequently change layout, include video content, or are embedded inside iframes, which often disrupt older click-fraud methods.

The threat actors behind the operation are using TensorFlow.js, an open-source machine learning library developed by Google. The framework allows trained AI models to run inside web browsers or server environments through JavaScript. In this case, the models are loaded remotely and used to process screenshots taken from an embedded browser.
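To make the mechanism concrete, the sketch below shows what screenshot-driven ad detection with TensorFlow.js could look like in principle. The model URL, output layout, and confidence threshold are assumptions for illustration; Dr.Web has not published the malware's actual model interface.

    // Hypothetical sketch of screenshot classification with TensorFlow.js; the model URL and
    // output layout ([confidence, x, y]) are assumptions, not the malware's real interface.
    import * as tf from "@tensorflow/tfjs";

    async function findTapTarget(screenshot: HTMLCanvasElement): Promise<{ x: number; y: number } | null> {
      // The campaign reportedly loads its models remotely; this URL is a placeholder.
      const model = await tf.loadGraphModel("https://example.com/ad-detector/model.json");

      // Turn the captured screenshot into a normalized image tensor.
      const input = tf.browser.fromPixels(screenshot).expandDims(0).toFloat().div(255);

      const output = model.predict(input) as tf.Tensor;
      const [confidence, x, y] = Array.from(output.dataSync());

      input.dispose();
      output.dispose();

      // A synthetic tap would then be dispatched at (x, y) inside the hidden WebView.
      return confidence > 0.8 ? { x, y } : null;
    }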

Researchers from mobile security firm Dr.Web reported that the malware has been distributed through GetApps, Xiaomi’s official application store. The infected apps are mainly games. In several cases, the applications were initially uploaded without harmful functionality and later received malicious components through software updates.

Once active, the malware can run in what researchers describe as a “phantom” mode. In this mode, it opens a hidden browser based on Android’s WebView component. This browser loads a webpage containing advertisements and a JavaScript file designed to automate interactions. The browser operates on a virtual screen that is not visible to the device owner. Screenshots of this screen are repeatedly captured and sent to the AI model, which identifies relevant ad elements and triggers taps that appear legitimate.

A second operational mode, referred to as “signalling,” gives attackers direct control. Using WebRTC technology, the malware streams a live video feed of the hidden browser to the threat actor. This allows them to perform actions such as tapping, scrolling, or entering text in real time.

Dr.Web identified multiple infected games hosted on Xiaomi’s platform, including titles with tens of thousands of downloads. Beyond official app stores, the malware has also been found in modified versions of popular streaming applications distributed through third-party APK websites, Telegram channels, and a Discord server with a large subscriber base. Many of these apps function as expected, which reduces user suspicion.

Although this activity does not directly target personal data, it still affects users through increased battery drain, higher mobile data usage, and faster device wear. For cybercriminals, however, covert ad fraud remains a profitable operation.

Security experts advise Android users to avoid downloading apps from unofficial sources and to be cautious of altered versions of well-known apps that promise free access to paid features.

Google Appears to Be Preparing Gemini Integration for Chrome on Android

 

Google appears to be testing a new feature that could significantly change how users browse the web on mobile devices. The company is reportedly experimenting with integrating its AI model, Gemini, directly into Chrome for Android, enabling advanced agentic browsing capabilities within the mobile browser.

The development was first highlighted by Leo on X, who shared that Google has begun testing Gemini integration alongside agentic features in Chrome’s Android version. These findings are based on newly discovered references within Chromium, the open-source codebase that forms the foundation of the Chrome browser.

Additional insight comes from a Chromium post, where a Google engineer explained the recent increase in Chrome’s binary size. According to the engineer, "Binary size is increased because this change brings in a lot of code to support Chrome Glic, which will be enabled in Chrome Android in the near future," suggesting that the infrastructure needed for Gemini support is already being added. For those unfamiliar, “Glic” is the internal codename used by Google for Gemini within Chrome.

While the references do not reveal exactly how Gemini will function inside Chrome for Android, they strongly indicate that Google is actively preparing the feature. The integration could mirror the experience offered by Microsoft Copilot in Edge for Android. In such a setup, users might see a floating Gemini button that allows them to summarize webpages, ask follow-up questions, or request contextual insights without leaving the browser.

On desktop platforms, Gemini in Chrome already offers similar functionality by using the content of open tabs to provide contextual assistance. This includes summarizing articles, comparing information across multiple pages, and helping users quickly understand complex topics. However, Gemini’s desktop integration is still not widely available. Users who do have access can launch it using Alt + G on Windows or Ctrl + G on macOS.

The potential arrival of Gemini in Chrome for Android could make AI-powered browsing more accessible to a wider audience, especially as mobile devices remain the primary way many users access the internet. Agentic capabilities could help automate common tasks such as researching topics, extracting key points from long articles, or navigating complex websites more efficiently.

At present, Google has not confirmed when Gemini will officially roll out to Chrome for Android. However, the appearance of multiple references in Chromium suggests that development is progressing steadily. With Google continuing to expand Gemini across its ecosystem, an official announcement regarding its availability on Android is expected in the near future.

AI Agent Integration Can Become a Problem in Workplace Operations


Not long ago, AI agents were considered harmless. They did what they were supposed to do: write snippets of code, answer questions, and help users get things done faster.

Then businesses started expecting more.

Gradually, companies moved from personal copilots to organizational agents integrated into customer support, HR, IT, engineering, and operations. These agents no longer just suggest; they act, touching real systems, changing configurations, and moving real data:

  • A support agent that pulls customer data from the CRM, triggers backend fixes, updates tickets, and checks billing.
  • An HR agent that oversees access across VPNs, IAM, and SaaS apps.
  • A change management agent that processes requests, logs actions in ServiceNow, and updates production configurations and Confluence.

These agents automate oversight and control, and they have become a core part of companies’ operational infrastructure.

How AI agents work

Organizational agents are built to work across many resources, supporting multiple roles, users, and workflows through a single deployment. Instead of being tied to an individual user, these agents operate as shared resources that handle requests and automate work across systems for many users.

To work effectively, these agents depend on shared accounts, OAuth grants, and API keys to authenticate to the systems they interact with. The credentials are long-lived and managed centrally, enabling the agent to operate continuously.

The threat of AI agents in the workplace

While this approach maximizes convenience and coverage, these design choices can unintentionally create powerful access intermediaries that bypass traditional permission boundaries. When an agent inadvertently grants access beyond a specific user's authority, the resulting actions can still appear legitimate and harmless.

When execution is attributed to the agent's identity, the user context is lost and reliable detection and attribution become impossible. Conventional security controls are not well suited to agent-mediated workflows because they assume direct system access by human users. IAM systems enforce permissions based on the user's identity, but when an AI agent performs an action, authorization is assessed against the agent's identity rather than the requester's.
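A minimal sketch makes the gap visible. In the first helper, the backend only ever sees the agent's shared credential, so authorization and audit logs have no idea which human asked; the second propagates the requester explicitly. The header name, environment variable, and logging here are hypothetical, not any specific vendor's API.

    // Hypothetical sketch: the same action executed with and without requester attribution.
    type AgentAction = { tool: string; params: Record<string, unknown> };

    async function callBackend(action: AgentAction, headers: Record<string, string>): Promise<void> {
      // Stand-in for the downstream system: note what its audit trail can and cannot see.
      console.log("AUDIT", new Date().toISOString(), headers["x-on-behalf-of"] ?? "agent-only", action.tool);
    }

    // Anti-pattern: everything runs under the agent's long-lived shared credential.
    async function executeAsAgent(action: AgentAction): Promise<void> {
      await callBackend(action, { authorization: `Bearer ${process.env.AGENT_API_TOKEN}` });
    }

    // Better: the requesting user travels with the call, so the backend can re-check that
    // user's permissions and the audit trail records a real "who" instead of a shared identity.
    async function executeOnBehalfOf(action: AgentAction, requester: string): Promise<void> {
      await callBackend(action, {
        authorization: `Bearer ${process.env.AGENT_API_TOKEN}`,
        "x-on-behalf-of": requester,
      });
    }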

The impact 

As a result, user-level restrictions no longer apply. Logging and audit trails exacerbate the issue by attributing behavior to the agent's identity and concealing who initiated the action and why. Security teams cannot enforce least privilege, identify misuse, or accurately assign intent when agents are in the loop, which allows permission bypasses to occur without triggering conventional safeguards. The absence of attribution also slows incident response, complicates investigations, and makes it difficult to determine the scope or aim of a security incident.

n8n Supply Chain Attack Exploits Community Nodes In Google Ads Integration to Steal Tokens


Hackers were found uploading a set of eight packages to the npm registry that posed as integrations for the n8n workflow automation platform in order to steal developers’ OAuth credentials.

About the exploit 

One package, called “n8n-nodes-hfgjf-irtuinvcm-lasdqewriit”, imitates the Google Ads integration, asks users to connect their ad account through a fake form, and sends the captured OAuth credentials to servers under the threat actors’ control.

Endor Labs released a report on the incident. "The attack represents a new escalation in supply chain threats,” it said, adding that “unlike traditional npm malware, which often targets developer credentials, this campaign exploited workflow automation platforms that act as centralized credential vaults – holding OAuth tokens, API keys, and sensitive credentials for dozens of integrated services like Google Ads, Stripe, and Salesforce in a single location."

Attack tactic 

Experts are not sure whether all of the packages share the same malicious functionality. ReversingLabs’ Spectra Assure analysed several of them and found no security issues in most, but in one package, “n8n-nodes-zl-vietts,” it flagged a malicious component with a known malware history.

The campaign may still be active, as an updated version of the package “n8n-nodes-gg-udhasudsh-hgjkhg-official” was posted to npm recently.

Once installed as a community node, the malicious package behaves like a typical n8n integration, showing configuration screens. When a workflow runs, it executes code that decrypts the stored tokens using n8n’s master key and sends the stolen data to a remote server.

This is the first time a supply chain attack has specifically targeted the n8n ecosystem, with hackers exploiting the trust placed in community integrations.

New risks in ad integration 

The findings highlight the security risks that come with integrating untrusted workflows, which can expand the attack surface. Developers are advised to audit packages before installing them, scrutinize package metadata for anomalies, and use official n8n integrations.

According to researchers Kiran Raj and Henrik Plate, "Community nodes run with the same level of access as n8n itself. They can read environment variables, access the file system, make outbound network requests, and, most critically, receive decrypted API keys and OAuth tokens during workflow execution.”
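The quote above is easier to picture with a simplified sketch of the execution contract. This is not the real n8n node interface, only an approximation: the host hands the node's execute() a context whose credential helper returns secrets already decrypted, so node code, benign or malicious, sees them in plaintext.

    // Simplified approximation of a community-node execution contract (not the real n8n interface).
    interface ExecutionContext {
      getCredentials(name: string): Promise<Record<string, string>>; // returns decrypted values
      getInputData(): unknown[];
    }

    class CommunityNode {
      async execute(ctx: ExecutionContext): Promise<unknown[]> {
        // The host decrypts stored OAuth tokens/API keys and hands them straight to node code.
        const creds = await ctx.getCredentials("googleAdsOAuth2");

        // A legitimate node would call the advertised API here; a malicious one can just as
        // easily send `creds` to a remote server before, or instead of, doing any real work.
        console.log("Credential fields visible to the node in plaintext:", Object.keys(creds));

        return ctx.getInputData();
      }
    }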

Trust Wallet Browser Extension Hacked, $7 Million Stolen


Users of the Binance-owned Trust Wallet lost more than $7 million after the release of an updated Chrome extension. Binance co-founder Changpeng Zhao (CZ) said the company will cover the losses of all affected users. Crypto investigator ZachXBT believes hundreds of Trust Wallet users suffered losses due to the extension flaw.

Trust Wallet said in a post on X, “We’ve identified a security incident affecting Trust Wallet Browser Extension version 2.68 only. Users with Browser Extension 2.68 should disable and upgrade to 2.69.”

CZ has assured that the company is investigating how threat actors were able to compromise the new version. 

Affected users

Mobile-only users and those on other browser extension versions are not impacted, Zhao said, writing on X that “User funds are SAFE.”

The compromise happened because of a flaw in a version of the Trust Wallet Google Chrome browser extension. 

What to do if you are a victim?

If you were affected by the compromise of Browser Extension v2.68, follow these steps shared on Trust Wallet’s X account:

  • To safeguard your wallet's security and prevent any problems, do not open the Trust Wallet Browser Extension v2.68 on your desktop computer.
  • Copy this URL into the address bar of your Chrome browser to open the Chrome Extensions panel: chrome://extensions/?id=egjidjbpglichdcondbcbdnbeeppgdph
  • If the toggle beneath the Trust Wallet extension is still "On," change it to "Off."
  • Select "Developer mode" from the menu in the top right corner.
  • Click the "Update" button in the upper left corner.
  • Verify that the version number is 2.69. This is the most recent and safe version.

Please wait to open the Browser Extension until you have updated to Extension version 2.69. This helps safeguard the security of your wallet and avoids possible problems.

How did the public react?

Social media users expressed their views. One said, “The problem has been going on for several hours,” while another user complained that the company “must explain what happened and compensate all users affected. Otherwise reputation is tarnished.” A user also asked, “How did the vulnerability in version 2.68 get past testing, and what changes are being made to prevent similar issues?”

Salesforce Pulls Back from AI LLMs Citing Reliability Issues


Salesforce, the well-known enterprise software company, is scaling back its heavy dependence on large language models (LLMs) after reliability problems frustrated its executives. Trust in LLMs inside the company has declined over the past year, according to The Information.

Parulekar, senior VP of product marketing, said, “All of us were more confident about large language models a year ago.” As a result, the company has shifted away from generative AI toward more “deterministic” automation in its flagship product, Agentforce.

In its official statement, the company said, “While LLMs are amazing, they can’t run your business by themselves. Companies need to connect AI to accurate data, business logic, and governance to turn the raw intelligence that LLMs provide into trusted, predictable outcomes.”

Salesforce has cut its customer support staff from about 9,000 to 5,000 employees following AI agent deployment. The company emphasizes that Agentforce can help "eliminate the inherent randomness of large models.”

Failing models, missing surveys

Salesforce experienced various technical issues with LLMs during real-world applications. According to CTO Muralidhar Krishnaprasad, when given more than eight prompts, the LLMs started missing commands. This was a serious flaw for precision-dependent tasks. 

Home security company Vivint used Agentforce for handling its customer support for 2.5 million customers and faced reliability issues. Even after giving clear instructions to send satisfaction surveys after each customer conversation, Agentforce sometimes failed to send surveys for unknown reasons. 

Another challenge was AI drift, according to executive Phil Mui. This happens when users ask irrelevant questions, causing AI agents to lose focus on their main goals.

AI expectations vs reality hit Salesforce 

The pullback from LLMs is an ironic twist for CEO Marc Benioff, who frequently advocates for AI transformation. In a conversation with Business Insider, Benioff said the company's annual strategy document now prioritizes data foundations over AI models because of “hallucination” issues. He has also floated rebranding the company as Agentforce.

Although Agentforce is expected to earn over $500 million in sales annually, the company's stock has dropped about 34% from its peak in December 2024. Thousands of businesses that presently rely on this technology may be impacted by Salesforce's partial pullback from large models as the company attempts to bridge the gap between AI innovation and useful business application.

Okta Report: "Payroll Pirate" Attacks Plague Multiple Industries


IT help desks should be ready for an evolving threat that sounds like a Hollywood movie title. In December 2025, Okta Threat Intelligence published a report explaining how hackers gain unauthorized access to payroll software. These schemes are known as payroll pirate attacks.

Pirates of the payroll

These attacks start with threat actors calling an organization’s help desk, pretending to be a user and requesting a password reset. 

“Typically, what the adversary will do is then come back to the help desk, probably to someone else on the phone, and say, ‘Well, I have my password, but I need my MFA factor reset,’” according to VP of Okta Threat Intelligence Brett Winterford. “And then they enroll their own MFA factor, and from there, gain access to those payroll applications for the purposes of committing fraud.”

Attack tactic 

The threat actors are operating at a massive scale and leveraging various services and devices to support their malicious activities. According to the Okta report, the attackers used social engineering, calling help desk personnel on the phone and attempting to trick them into resetting the password for a user account. These attacks have impacted multiple industries.

“They’re certainly some kind of cybercrime organization or fraud organization that is doing this at scale,” Winterford said. Okta believes the hacking gang is based in West Africa.

In the US, payroll pirate schemes have recently plagued the education sector. The latest Okta research notes that they are now spreading to other industries, such as retail and manufacturing. “It’s not often you’ll see a huge number of targets in two distinct industries. I can’t tell you why, but education [and] manufacturing were massively targeted,” Winterford said.

How to mitigate pirates of payroll attacks?

Okta advises companies to establish a standard process for verifying the identity of users who contact the help desk for assistance. Winterford added that businesses relying on outsourced IT support should limit their help desks’ ability to reset user passwords without robust verification measures. “In some organizations, they’re relying on nothing but passwords to get access to payroll systems, which is madness,” he said.



Google Launches Emergency Location Services in India for Android Devices


Google starts emergency location service in India

Google recently announced the launch of its Emergency Location Service (ELS) in India for compatible Android smartphones. When users in an emergency call or text emergency service providers such as the police, firefighters, or healthcare professionals, ELS can share their accurate location with responders immediately.

Uttar Pradesh (UP) has become the first Indian state to operationalise ELS for Android devices. ELS is available on devices running Android 6 or newer; however, state authorities must integrate it with their emergency services before it becomes active.

More about ELS

According to Google, the ELS function on Android handsets has been activated in India. The built-in emergency service will enable Android users to communicate their location by call or SMS in order to receive assistance from emergency service providers, such as firefighters, police, and medical personnel. 

ELS on Android collects information from the device's GPS, Wi-Fi, and cellular networks in order to pinpoint the user's exact location, with an accuracy of up to 50 meters.

Implementation details

However, local wireless and emergency infrastructure operators must enable support for the ELS capability. The first state in India to "fully" operationalize the service for Android devices is Uttar Pradesh. 

The state police, in partnership with Pert Telecom Solutions, have integrated ELS with the emergency number 112. It is a free service that shares a user's position only when an Android phone dials 112.

Google added that all suitable handsets running Android 6.0 and later versions now have access to the ELS functionality. 

The company says ELS can deliver a caller's location even if a call drops within seconds of being answered, and that ELS on Android has supported over 20 million calls and SMS messages to date. ELS relies on Android's Fused Location Provider, Google's machine learning-based location tool.

Promising safety?

According to Google, the location data is available only to emergency service providers, and the company never collects or uses it for itself. ELS data is sent directly to the relevant authority alone.

Recently, Google also launched the Emergency Live Video feature for Android devices. It lets users share their camera feed with a responder during an emergency call or SMS exchange. The emergency service provider must get the user's approval for access: a prompt appears on screen as soon as the responder requests video, and the user can accept the request and share a live feed or decline it.

High Severity Flaw In Open WebUI Can Leak User Conversations and Data


Experts have found a high-severity security bug in Open WebUI that may expose users to account takeover (ATO) and, in some cases, lead to full server compromise.

Talking about WebUI, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”

The flaw, tracked as CVE-2025-64496, was found by Cato Networks researchers. It affects Open WebUI versions 0.6.34 and older when the Direct Connections feature is enabled, and it carries a severity rating of 7.3 out of 10.

The vulnerability exists inside Direct Connections, which allows users to connect Open WebUI to external OpenAI-supported model servers. While built for supporting flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking with a malicious server pretending to be a genuine AI endpoint. 

Fundamentally, the vulnerability stems from misplaced trust between unsafe model servers and the user's browser session. A malicious server can send a tailored server-sent events message that triggers the execution of JavaScript code in the browser, letting a threat actor steal authentication tokens stored in local storage. With these tokens, the attacker gains full access to the user's Open WebUI account, exposing chats, API keys, uploaded documents, and other sensitive data.
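Part of what makes the token theft so direct is where the token lives. Anything kept in the browser's localStorage is readable by any script that runs in the page's origin, including script smuggled in the way described above. A minimal illustration follows; the storage key name is hypothetical, not Open WebUI's actual layout.

    // Why localStorage tokens are exposed to injected script (key name is hypothetical).
    const AUTH_TOKEN_KEY = "token";

    const stolen = localStorage.getItem(AUTH_TOKEN_KEY);
    if (stolen) {
      // From here a single outbound request is enough for an attacker to hijack the session.
      console.warn("Token readable by any script on this origin:", stolen.slice(0, 8) + "...");
    }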

Depending on user privileges, the consequences can be different.

Consequences?

  • Theft of JSON Web Tokens (JWTs) and session hijacking.
  • Full account takeover, including access to chat logs and uploaded documents.
  • Leakage of sensitive data and credentials shared in conversations.
  • Remote code execution (RCE) if the user has the workspace.tools permission enabled.

Open WebUI maintainers were informed of the issue in October 2025, and it was publicly disclosed in November 2025 after patch validation and CVE assignment. Open WebUI versions 0.6.35 and later block the malicious execute events, removing the user-facing threat.

According to Cato Networks researchers, upgrading to v0.6.35 or newer “closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources.”





New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would be subject to data privacy obligations enforceable in federal court under a wide-ranging legislative proposal disclosed recently by US Senator Marsha Blackburn, R-Tenn.

About the proposal

Among other things, the proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent. It allows statutory and punitive damages, attorney's fees, and injunctions.

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulations 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

The directive required the administration to collaborate with Congress to ensure a single, least-burdensome national standard rather than fifty inconsistent state ones.

Michael Kratsios, the president's science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, were instructed by the president to jointly propose federal AI legislation that would supersede any state laws that contradict administration policy.

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of proposal:

  • Mandate that regulations defining "minimum reasonable" AI protections be created by the Federal Trade Commission. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal would preempt state laws regulating the management of catastrophic AI risks, and it would also largely preempt state digital replica laws in order to create a single national standard for AI.

The proposal would not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill would take effect 180 days after enactment.

India's Fintech Will Focus More on AI & Compliance in 2026


India’s fintech industry enters 2026 with a new set of goals. The sector initially focused on rapid expansion through digital payments and aggressive customer acquisition, but it is now shifting toward sustainable growth, compliance, and risk management.

“We're already seeing traditional boundaries blur- payments, lending, embedded finance, and banking capabilities are coming closer together as players look to build more integrated and efficient models. While payments continue to be powerful for driving access and engagement, long-term value will come from combining scale with operational efficiency across the financial stack,” said Ramki Gaddapati, Co-Founder, APAC CEO and Global CTO, Zeta.

Artificial intelligence (AI) is emerging as a critical tool in this transformation, helping firms strengthen fraud detection, streamline regulatory processes, and enhance customer trust.

What does the data suggest?

According to Reserve Bank of India (RBI) data, digital payment volumes crossed 180 billion transactions in FY25, powered largely by the Unified Payments Interface (UPI) and embedded payment systems across commerce, mobility, and lending platforms. 

Yet, regulators and industry leaders are increasingly concerned about operational risks and fraud. The RBI, along with the Bank for International Settlements (BIS), has highlighted vulnerabilities in digital payment ecosystems, urging fintechs to adopt stronger compliance frameworks.

AI a major focus

Artificial intelligence is set to play a central role in this compliance-first era. Fintech firms are deploying AI to:

  • Detect and prevent fraudulent transactions in real time
  • Automate compliance reporting and monitoring
  • Personalize customer experiences while maintaining data security
  • Analyze risk patterns across lending and investment platforms

Moving beyond payments?

The sector is also diversifying beyond payments. Fintechs are moving deeper into credit, wealth management, and banking-related services, areas that demand stricter oversight. This diversification allows firms to capture new revenue streams and broaden their customer base, but it also exposes them to heightened regulatory scrutiny and the need for more robust governance structures.

“The DPDP Act is important because it protects personal data and builds trust. Without compliance, organisations face penalties, data breaches, customer loss, and reputational damage. Following the law improves credibility, strengthens security, and ensures responsible data handling for sustained business growth,” said Neha Abbad, co-founder, CyberSigma Consulting.




Former Cybersecurity Employees Involved in Ransomware Extortion Incidents Worth Millions


It is a blow to the cybersecurity industry when its own professionals betray the trust placed in them and launch attacks against companies in their own country. In a shocking case, two men have admitted to working day jobs as cybersecurity professionals while moonlighting as cyber attackers.

About accused

An ex-employee of the Israeli cybersecurity company Sygnia has pleaded guilty to federal crimes in the US for his involvement in ransomware attacks aimed at extorting millions of dollars from American firms.

Ryan Clifford Goldberg, who worked as a cyber incident response supervisor at Sygnia, admitted that he was involved in a year-long scheme to attack businesses across the US.

Kevin Tyler Martin, a former DigitalMint employee who worked as a ransomware negotiation intermediary, a role meant to help ransomware victims, has also admitted involvement.

The situation is particularly disturbing because both men held positions of trust inside the sector established to fight against such threats.

Accused pled guilty to extortion charges 

Both men have pleaded guilty to one count of conspiracy to interfere with commerce by extortion, according to federal court records. In their plea statements, they admitted that, along with a third, unnamed individual who has not been charged, they carried out company compromises and ransom extortions over several years.

Extortion worth millions 

In one incident, the actors extorted over $1 million in cryptocurrency from a Florida-based medical equipment firm. According to federal court documents, alongside their legitimate work they deployed the ALPHV/BlackCat ransomware to exfiltrate and encrypt targets’ data, and they shared the extortion proceeds with the ransomware’s developers.

According to DigitalMint, two of the individuals involved were former employees. Both were terminated after the incident and “acted wholly outside the scope of their employment and without any authorization, knowledge or involvement from the company,” DigitalMint said in an email shared with Bloomberg.

Sygnia told Bloomberg that it was not a target of the investigation and that Goldberg was dismissed as soon as the allegations became known.

A representative for Sygnia declined to speak further, and Goldberg and Martin's lawyers also declined to comment on the report.

2FA Fail: Hackers Exploit Microsoft 365 to Launch Code Phishing Attacks


Two-factor authentication (2FA) has long been one of the more secure ways to protect online accounts, requiring a secondary code in addition to a password. In recent times, however, 2FA has become less reliable, as attackers have found ways to work around it.

Experts now advise users to adopt passkeys, which are more resistant to phishing, and recent reports show how 2FA can be undermined.

Russian-linked, state-sponsored threat actors are now abusing Microsoft 365. Researchers at Proofpoint have observed a surge in Microsoft 365 account takeover attacks in which threat actors use authentication code phishing to abuse Microsoft's device authorization flow.

They are also running advanced phishing campaigns that bypass 2FA and compromise sensitive accounts.

About the attack

The recent series of attacks uses device code phishing, in which hackers lure victims into entering authentication codes on convincing fake sign-in pages. Once the code is entered, the attackers gain access to the victim's Microsoft 365 account, bypassing the protection of 2FA.

The campaigns started in early 2025, with hackers relying primarily on device code phishing. By March, they had expanded their tactics to abuse OAuth authentication workflows, which are widely used for signing into apps and services. The shift shows how quickly threat actors adapt once defenders catch on to their tricks.
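The device authorization flow these campaigns abuse is the standard OAuth 2.0 grant defined in RFC 8628, and seeing it spelled out shows why 2FA does not block the attack: whoever starts the flow receives the tokens, while the victim does the signing in. The sketch below uses Microsoft's documented v2.0 endpoints; the client ID and scope are placeholders, and a real client would poll the token endpoint repeatedly at the interval the server specifies.

    // Standard OAuth 2.0 device authorization grant (RFC 8628); client_id and scope are placeholders.
    const TENANT = "common";
    const CLIENT_ID = "00000000-0000-0000-0000-000000000000";

    async function deviceCodeFlow(): Promise<void> {
      // 1. The initiating party (in these attacks, the phisher) requests a device code and user code.
      const dcResp = await fetch(`https://login.microsoftonline.com/${TENANT}/oauth2/v2.0/devicecode`, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({ client_id: CLIENT_ID, scope: "openid profile offline_access" }),
      });
      const dc = await dcResp.json();

      // 2. Whoever enters dc.user_code at dc.verification_uri and completes sign-in, password and
      //    MFA included, approves the session. In the phishing campaigns that person is the victim.
      console.log(`Enter ${dc.user_code} at ${dc.verification_uri}`);

      // 3. The initiating party polls the token endpoint (simplified to one request here) and
      //    receives tokens for the victim's account once the code has been approved.
      const tokenResp = await fetch(`https://login.microsoftonline.com/${TENANT}/oauth2/v2.0/token`, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "urn:ietf:params:oauth:grant-type:device_code",
          client_id: CLIENT_ID,
          device_code: dc.device_code,
        }),
      });
      const token = await tokenResp.json();
      console.log("Tokens are issued to the party that started the flow:", Boolean(token.access_token));
    }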

Who is the victim? 

The attacks particularly target high-value sectors, including:

  • Universities and research institutes
  • Defense contractors
  • Energy providers
  • Government agencies
  • Telecommunication companies

By targeting these sectors, hackers increase the impact of their attacks for purposes such as disruption, espionage, and financial motives. 

The impact 

The surge in 2FA code attacks exposes a hard truth: no security measure is foolproof. While 2FA is still far stronger than relying on passwords alone, it can be undermined if users are deceived into handing over their codes. This is not a failure of the technology itself, but of human trust and awareness.

A single compromised account can expose sensitive emails, documents, and internal systems. Users are at risk of losing their personal data, financial information, and even identity in these cases.

How to Stay Safe

  • Verify URLs carefully. Never enter authentication codes on unfamiliar or suspicious websites.
  • Use phishing-resistant authentication. Hardware security keys (like YubiKeys) or biometric logins are harder to trick.
  • Enable conditional access policies. Organizations can restrict logins based on location, device, or risk level.
  • Monitor OAuth activity. Be cautious of unexpected consent requests from apps or services.
  • Educate users. Awareness training is often the most effective defense against social engineering.


Antivirus vs Identity Protection Software: What to Choose and How?


Users often lump digital security into a single category and confuse identity protection with antivirus, assuming both do the same job. They do not. Before you buy either, it is important to understand the difference. This post covers the difference between identity theft protection and device security.

Cybersecurity threats: Past vs present 

In the past, a common computer virus might crash a machine and infect a few files, and that was it. Today, the threat landscape has shifted from disrupting computers to stealing personal data.

A computer virus is malware that self-replicates and spreads between devices, corrupting data and software; some can also steal personal information.

Over time, hackers have learned that users are easier targets than machines. These days, malware and social engineering pose a bigger threat than classic viruses: a well-planned phishing email or a fake login page earns attackers more than a traditional virus ever did.

The surge in data breaches has made things easy for attackers. Your data, including phone numbers, financial details, and passwords, is sitting in breached databases and being sold like bulk goods on the dark web.

AI has made things worse. Hackers can now generate believable messages and even impersonate your voice. These scams don't require much creativity; they only need to be convincing enough to get a victim to click or reply.

Where antivirus fails

Your personal data never stays only on your computer: it is collected and sold by data brokers and advertisers, or passed to third parties who profit from it. When threat actors get hold of it, they can use it to impersonate you.

In this case, antivirus is of no help. It protects your device from malicious software, but that is the limit of what it can do. It cannot detect a breach at an organization you don't control or someone impersonating you elsewhere. Antivirus protects the machine, not the identity of the user behind it.

Role of identity theft protection 

Identity protection doesn't concern itself with your system's health. It watches the information that follows you everywhere: your SSN, email addresses, phone number, and accounts linked to your finances, and it alerts you if something suspicious turns up. The work is mostly monitoring: it may watch your credit reports for a new account, a hard inquiry, or a falling credit score, looking for early warning signs of theft. It also checks whether your data has appeared on the dark web or in recent leaks.

Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.