Shadow AI Quietly Spreads Across Workplaces, Study Warns

The report found that companies themselves are partly fueling this behavior.

A growing number of employees are using artificial intelligence tools that their companies have never approved, a new report by 1Password has found. The practice, known as shadow AI, is quickly becoming one of the biggest unseen cybersecurity risks inside organizations.

According to 1Password’s 2025 Annual Report, based on responses from more than 5,000 knowledge workers in six countries, one in four employees admitted to using unapproved AI platforms. The study shows that while most workplaces encourage staff to explore artificial intelligence, many employees do so without understanding the data privacy or compliance implications.


How Shadow AI Works

Shadow AI refers to employees relying on external or free AI services without oversight from IT teams. For instance, workers may use chatbots or generative tools to summarize meetings, write reports, or analyze data, even if these tools were never vetted for corporate use. Such platforms can store or learn from whatever information users enter into them, meaning sensitive company or customer data could unknowingly end up being processed outside secure environments.

The 1Password study found that 73 percent of workers said their employers support AI experimentation, yet 37 percent do not fully follow the official usage policies. Twenty-seven percent said they had used AI tools their companies never approved, making shadow AI the second-most common form of shadow IT, just after unapproved email use.


Why Employees Take the Risk

Experts say this growing behavior stems from convenience and the pressure to be efficient. During a CISO roundtable hosted for the report, Mark Hazleton, Chief Security Officer at Oracle Red Bull Racing, said employees often “focus on getting the job done” and find ways around restrictions if policies slow them down.

The survey bears this out: 45 percent of respondents said they use unauthorized AI tools because they are convenient, and 43 percent said AI helps them be more productive.

Security leaders like Susan Chiang, CISO at Headway, warn that the rapid expansion of third-party tools hasn’t been matched by awareness of the potential consequences. Many users, she said, still believe that free or browser-based AI apps are harmless.


The Broader Shadow IT Problem

1Password’s research highlights that shadow AI is part of a wider trend. More than half of employees (52 percent) admitted to downloading other apps or using web tools without approval. Brian Morris, CISO at Gray Media, explained that tools such as Grammarly or Monday.com often slip under the radar because employees do not think of browser-based services as applications that could expose company data.


Building Safer AI Practices

The report advises companies to adopt a three-step strategy:

1. Keep an up-to-date inventory of all AI tools being used.

2. Define clear, accessible policies and guide users toward approved alternatives.

3. Implement controls that prevent sensitive data from reaching unverified AI systems; a minimal sketch of one such control follows this list.
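The report does not prescribe how such controls should be built. As a rough sketch only, the snippet below shows one common approach: a pattern-based filter that inspects a prompt for sensitive markers before forwarding it to an external AI service. The pattern list, function names, and example strings here are illustrative assumptions, not part of 1Password’s guidance.

```python
import re

# Illustrative detectors only; a production DLP control would rely on
# vetted pattern libraries and context-aware classifiers, not this list.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API key": re.compile(r"\b(?:sk|pk|api|key)[-_][A-Za-z0-9_]{16,}"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def send_to_ai_service(prompt: str) -> None:
    """Forward a prompt to an approved AI endpoint, or block it."""
    findings = scan_outbound_text(prompt)
    if findings:
        raise PermissionError(
            "Blocked: prompt appears to contain " + ", ".join(findings) + ".")
    # A real control would forward the prompt to the approved endpoint here.
    print("Prompt passed the check; forwarding to the approved service.")

if __name__ == "__main__":
    send_to_ai_service("Summarize this meeting about the Q3 roadmap.")
    try:
        send_to_ai_service("Use key sk_live_abcdef1234567890 to pull data.")
    except PermissionError as err:
        print(err)
```

In practice, a check like this usually lives in a browser extension, proxy, or API gateway rather than in the application itself, so it applies no matter which AI tool an employee reaches for.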

Chiang added that organizations should not only chase major threats but also tackle smaller issues that accumulate over time. She described this as avoiding “death by a thousand cuts,” which can be prevented through continuous education and awareness programs.

As AI becomes embedded in daily workflows, experts agree that responsible use and visibility are key. Encouraging innovation should not mean ignoring the risks. For organizations, managing shadow AI is no longer optional; it is essential for protecting data integrity and maintaining digital trust.


