Employees are Feeding Sensitive Data to ChatGPT, Prompting Security Concerns

Most exposures of sensitive data to ChatGPT appear to originate with developers posting source code.

Despite the evident risk of leaks and breaches, employees are still sharing sensitive company information with chatbots such as ChatGPT and other AI writing tools, according to a recent study from Netskope.

The study, which covers 1.7 million users across 70 international organisations, found an average of 158 incidents of source code being posted to ChatGPT per 10,000 users each month, making source code the most frequently exposed category of sensitive corporate data.

Private data (18 incidents per 10,000 users per month) and intellectual property (4 incidents per 10,000 users per month) are posted to ChatGPT far less often, which suggests that many developers simply do not appreciate the damage leaked source code can cause.

Netskope also emphasised the surging interest in artificial intelligence, alongside ongoing exposures that can create weak points for businesses. The study reports a 22.5% increase in GenAI app usage over the previous two months, with large enterprises of more than 10,000 users running an average of five AI apps daily.

ChatGPT leads all other GenAI apps with more than eight times as many daily active users. At an average of six prompts per user per day, each of those users has ample opportunity to expose their employer's data.

ChatGPT (84%) tops the list of generative AI apps used by companies worldwide, with Grammarly (9.9%) and Bard (4.5%) rounding out the top three. Bard is growing at a brisk 7.1% per week, compared with ChatGPT's 1.6%.

Ray Canzanese, director of threat research at Netskope, argues that although some may claim the posting of source code or other sensitive information can be avoided, it is in fact "inevitable." Canzanese instead places the responsibility for implementing AI controls on organisations.

According to James Robinson, the company's Deputy Chief Information Security Officer, "organisations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively." 

Netskope advises IT teams and administrators to deploy modern data loss prevention (DLP) technology, block access to unnecessary or unacceptably risky apps, and provide frequent user coaching and training.
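To make the DLP recommendation concrete, below is a minimal, hypothetical sketch of the kind of outbound prompt filter such tooling might apply before a prompt reaches a GenAI app. The `SENSITIVE_PATTERNS` rules and the `check_prompt` helper are illustrative assumptions, not Netskope's product or any real vendor's API; production DLP relies on far richer detection than a handful of regexes.

```python
import re

# Illustrative patterns only: real DLP products combine classifiers,
# document fingerprinting, and exact-match hashing, not just regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code": re.compile(r"(def\s+\w+\s*\(|class\s+\w+|#include\s*[<\"])"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Can you refactor this? def transfer(amount): api_key = 'sk-123'"
    findings = check_prompt(prompt)
    if findings:
        # A real deployment might block, redact, or coach the user instead.
        print(f"Blocked: prompt matched {findings}")
    else:
        print("Prompt allowed")
```

In practice, Netskope's guidance pairs this sort of automated control with in-the-moment user coaching, so a blocked prompt becomes a teaching opportunity rather than a silent denial.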