Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.
The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.
This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business surveys show that while a majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently paste sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.
Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.
Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.
Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.
Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.
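One practical response to the data-minimisation problem is to redact obvious personal identifiers from a prompt before it ever leaves the organisation. The sketch below is a minimal illustration of that idea; the regular expressions are illustrative assumptions and would need extension for production use, not a complete PII detector.

```python
# Minimal data-minimisation sketch: redact obvious personal identifiers
# from a prompt before it is sent to an external AI service.
# The patterns below are illustrative assumptions, not exhaustive.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")  # loose match for phone-like strings

def minimise(prompt: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(minimise("Contact jane.doe@example.com or +44 20 7946 0958"))
```

A real deployment would sit this kind of filter in a proxy or browser extension between employees and the AI service, so redaction happens regardless of which tool is used.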
Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
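Monitoring for unsanctioned usage can start with something as simple as scanning outbound web-proxy logs for traffic to known consumer AI services. The sketch below assumes a hypothetical three-field log format and an illustrative domain list; both are placeholders an organisation would replace with its own.

```python
# Minimal sketch: flag outbound requests to consumer AI services in a
# web-proxy log. The log format (timestamp, user, host) and the domain
# list are illustrative assumptions, not a vendor-endorsed inventory.
import re

UNSANCTIONED_AI_DOMAINS = {
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<user>\S+) (?P<host>\S+)")

def flag_shadow_ai(log_lines):
    """Return (timestamp, user, host) tuples for unsanctioned AI traffic."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("host") in UNSANCTIONED_AI_DOMAINS:
            hits.append((m.group("ts"), m.group("user"), m.group("host")))
    return hits

sample = [
    "2025-09-01T09:14:02Z alice chatgpt.com",
    "2025-09-01T09:15:10Z bob internal.example.com",
]
print(flag_shadow_ai(sample))
```

Flagged entries are better used to steer employees toward an approved enterprise platform than to punish them, in line with the guidance above.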
If companies fail to act, casual AI conversations can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.
The Federal Bureau of Investigation (FBI) has issued a pressing security alert regarding two cybercriminal groups that are breaking into corporate Salesforce systems to steal information and demand ransoms. The groups, tracked as UNC6040 and UNC6395, have been carrying out separate but related operations, each using different methods to compromise accounts.
In its official advisory, the FBI explained that attackers are exploiting weaknesses in how companies connect third-party tools to Salesforce. To help organizations defend themselves, the agency released a list of warning signs, including suspicious internet addresses, user activity patterns, and malicious websites linked to the breaches.
How the Attacks Took Place
The first campaign, attributed to UNC6040, came to light in mid-2024. According to threat intelligence researchers, the attackers relied on social engineering, particularly fraudulent phone calls to employees. In these calls, criminals posed as IT support staff and convinced workers to link fake Salesforce apps to company accounts. One such application was disguised under the name “My Ticket Portal.” Once connected, the attackers gained access to sensitive databases and downloaded large amounts of customer-related records, especially tables containing account and contact details. The attackers later used the stolen data to extort the affected companies.
A newer wave of incidents, tied to UNC6395, was detected a few months later. This group relied on stolen digital tokens from tools such as Salesloft Drift, which normally allow companies to integrate external platforms with Salesforce. With these tokens, the hackers were able to enter Salesforce systems and search through customer support case files. These cases often contained confidential information, including cloud service credentials, passwords, and access keys. Possessing such details gave the attackers the ability to break into additional company systems and steal more data.
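Much of the damage in the UNC6395 wave came from credentials sitting in plain text inside support cases. A defensive counterpart is to scan case bodies for credential-shaped strings before they are stored where a compromised integration could read them. In the sketch below, the AWS access-key-ID format (the `AKIA` prefix plus 16 characters) is publicly documented; the other patterns are generic illustrative assumptions.

```python
# Minimal sketch: scan support-case text for credential-shaped strings.
# The AWS key-ID pattern (AKIA + 16 chars) is public; the bearer-token
# and password patterns are generic assumptions, not a complete ruleset.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._-]{20,}"),
    "password_field": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
}

def scan_case(text):
    """Return the names of secret patterns found in a support-case body."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

case = "Login fails. password=Hunter2! and key AKIAABCDEFGHIJKLMNOP"
print(scan_case(case))
```

Matches should trigger redaction of the case and rotation of the exposed credential, since deleting the text does not revoke the secret.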
Investigations revealed that the compromise of these tokens originated months earlier, when attackers infiltrated the software provider’s code repositories. From there, they stole authentication tokens and expanded their reach, showing how one breach in the supply chain can spread to many organizations.
The Scale of the Campaigns
The campaigns have had far-reaching consequences, affecting a wide range of businesses across different industries. In response, the software vendors involved worked with Salesforce to disable the stolen tokens and force customers to reauthenticate. Despite these steps, the stolen data and credentials may still pose long-term risks if reused elsewhere.
According to industry reports, the campaigns are believed to have impacted a number of well-known organizations across sectors, including technology firms such as Cloudflare, Zscaler, Tenable, and Palo Alto Networks, as well as companies in finance, retail, and enterprise software. Although the FBI has not officially attributed the intrusions, external researchers have linked the activity to criminal collectives with ties to groups known as ShinyHunters, Lapsus$, and Scattered Spider.
FBI Recommendations
The FBI is urging organizations to take immediate action by reviewing connected third-party applications, monitoring login activity, and rotating any keys or tokens that may have been exposed. Security teams are encouraged to rely on the technical indicators shared in the advisory to detect and block malicious activity.
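Acting on the advisory's technical indicators can be as direct as checking recent login records against the published IOC addresses. The sketch below uses documentation-range placeholder IPs and an invented record layout; both stand in for the real indicator list and whatever login-history export an organisation actually has.

```python
# Minimal sketch: match login records against indicator-of-compromise
# (IOC) IP addresses of the kind listed in the FBI advisory.
# The IPs below are documentation-range placeholders, not real indicators,
# and the record layout is an illustrative assumption.
IOC_IPS = {"203.0.113.7", "198.51.100.23"}

logins = [
    {"user": "j.smith", "ip": "203.0.113.7", "app": "My Ticket Portal"},
    {"user": "a.jones", "ip": "10.0.0.5", "app": "Salesforce CLI"},
]

def match_iocs(records, ioc_ips):
    """Return login records whose source IP appears in the IOC set."""
    return [r for r in records if r["ip"] in ioc_ips]

for hit in match_iocs(logins, IOC_IPS):
    print(f"ALERT: {hit['user']} from {hit['ip']} via {hit['app']}")
```

Any hit warrants the same response the FBI recommends more broadly: revoke the session, rotate associated tokens, and review what the account accessed.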
Although the identity of the hackers remains uncertain, the scale of the attacks highlights how valuable cloud-based platforms like Salesforce have become for criminals. The FBI has not confirmed the groups’ claims about further breaches and has declined to comment on ongoing investigations.
For businesses, the message is clear: protecting cloud environments requires not only technical defenses but also vigilance against social engineering tactics that exploit human trust.