Workplace AI Tools Now Top Cause of Data Leaks, Cyera Report Warns


A recent Cyera report reveals that generative AI tools like ChatGPT, Microsoft Copilot, and Claude have become the leading source of workplace data leaks, surpassing traditional channels like email and cloud storage for the first time. The alarming trend shows that nearly 50% of enterprise employees are using AI tools at work, often unknowingly exposing sensitive company information through personal, unmanaged accounts.

The research found that 77% of AI interactions in workplace settings involve actual company data, including financial records, personally identifiable information, and strategic documents. Employees frequently copy and paste confidential materials directly into AI chatbots, believing they are simply improving productivity or efficiency. However, many of these interactions occur through personal AI accounts rather than enterprise-managed ones, making them invisible to corporate security systems.

The critical issue lies in how traditional cybersecurity measures fail to detect these leaks. Most security platforms are designed to monitor file attachments, suspicious downloads, and outbound emails, but AI conversations appear as normal web traffic. Because data is shared through copy-paste actions within chat windows rather than direct file uploads, it bypasses conventional data-loss prevention tools entirely.

A 2025 LayerX enterprise report found that 67% of AI interactions happen on personal accounts, creating a significant blind spot for IT teams, who cannot monitor or restrict these logins. This makes it nearly impossible for organizations to provide adequate oversight or implement protective measures. In many cases, employees are not intentionally leaking data; they are simply unaware of the security risks in seemingly innocent actions like asking an AI to "summarize this report."

Security experts emphasize that the solution is not to ban AI outright but to implement stronger controls and improved visibility. Recommended measures include blocking access to generative AI through personal accounts, requiring single sign-on for all AI tools on company devices, monitoring for sensitive keywords and clipboard activity, and treating AI chat interactions with the same scrutiny as traditional file transfers.
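The keyword-monitoring measure above can be sketched in a few lines. The following is a minimal, hypothetical illustration of screening text for sensitive patterns before it reaches an AI chat window; the pattern names, regexes, and keyword list are illustrative assumptions, not taken from the Cyera report or any specific DLP product.

```python
import re

# Illustrative patterns only -- a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # naive card-number shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email address
}
# Ordered tuple so results are deterministic.
SENSITIVE_KEYWORDS = ("confidential", "internal only", "do not distribute")

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns or keywords found in text."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    lowered = text.lower()
    hits += [kw for kw in SENSITIVE_KEYWORDS if kw in lowered]
    return hits

# Example: a string an employee might paste into a chatbot.
print(flag_sensitive("Q3 revenue deck - CONFIDENTIAL - contact jane@corp.example"))
```

In practice this kind of check would run in a browser extension or endpoint agent intercepting clipboard events, but the core idea is the same: treat pasted chat input with the same scrutiny as an outbound file transfer.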

The fundamental advice for employees is straightforward: never paste anything into an AI chat that you wouldn't post publicly on the internet. As AI adoption continues to grow in workplace settings, organizations must recognize this emerging threat and take immediate action to protect sensitive information from inadvertent exposure.

Labels: AI Tool, Cyber Security, Data Leak, Generative AI, Threat Report