Blocking Access to AI Apps is a Short-term Solution to Mitigate Safety Risk

Sensitive data communicated via ChatGPT includes regulated data, such as financial and healthcare data, as well as passwords and keys.


Another major revelation regarding ChatGPT recently came to light through research conducted by Netskope. According to the firm's analysis, business organizations are seeing roughly 183 incidents of sensitive data being posted to ChatGPT per 10,000 corporate users each month. Among the sensitive data being exposed, source code accounts for the largest share.

The security researchers further analyzed data from a million enterprise users worldwide and highlighted the growing use of generative AI apps, which rose 22.5% over the past two months, correspondingly increasing the chance of sensitive data being exposed.

ChatGPT Reigns Over the Generative AI Market

Organizations with 10,000 or more users are now using AI tools on a regular basis – five apps on average. Compared with other generative AI apps, ChatGPT has more than eight times as many daily active users. At the present growth pace, the number of people accessing AI apps is anticipated to double within the next seven months.

Google Bard was the AI app with the fastest-growing install base over the last two months, currently attracting new users at a rate of 7.1% per week versus 1.6% for ChatGPT. Even so, at the current rate Google Bard is not projected to overtake ChatGPT for more than a year, though the generative AI app market is expected to grow considerably before then, with many more apps in development.

Besides intellectual property (excluding source code) and personally identifiable information, other sensitive data communicated via ChatGPT includes regulated data, such as financial and healthcare data, as well as passwords and keys, which are typically embedded in source code.

According to Ray Canzanese, Threat Research Director, Netskope Threat Lab, “It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing[…]Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

Safety Measures for Adopting AI Apps

As opportunistic attackers look to profit from the popularity of artificial intelligence, Netskope Threat Labs is presently monitoring ChatGPT proxies and more than 1,000 malicious URLs and domains, including several phishing attacks, malware distribution campaigns, spam, and fraud websites.

While blocking access to AI content and apps may seem like a good idea, it is only a short-term solution.

James Robinson, Deputy CISO at Netskope, said “As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity[…]Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

To enable the safe adoption of AI apps, organizations must focus their strategy on identifying acceptable applications and implementing controls that let users realize their full potential while protecting the business from risk. To guard against attacks, such a strategy should incorporate domain filtering, URL filtering, and content inspection.
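As a rough illustration of the domain-filtering idea above, the sketch below checks an outbound URL against a blocklist of domains (such as the ChatGPT proxy sites Netskope Threat Labs tracks). The blocklist entries and function name are hypothetical examples, not part of any vendor's product:

```python
# Minimal sketch of domain-based filtering for outbound traffic to AI apps.
# BLOCKED_DOMAINS and is_allowed() are illustrative assumptions, not a real
# Netskope control; the domains below are placeholder examples.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"chatgpt-proxy.example.com", "free-gpt.example.net"}

def is_allowed(url: str) -> bool:
    """Return False if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)
```

A real deployment would pair a check like this with URL filtering and content inspection rather than relying on domain matching alone.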

Here are some additional measures to secure data and use AI tools safely:

  • Block access to apps that lack a legitimate business purpose or that put the organization at disproportionate risk.
  • Educate employees and remind them of company policy on the use of AI apps.
  • Use modern data loss prevention (DLP) tools to detect posts containing potentially sensitive data.
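The DLP point in the list above can be sketched with a simple pre-submission scan that flags text resembling credentials or keys before it is posted to a generative AI app. The patterns and function name are simplified assumptions for illustration, not production-grade detection rules:

```python
# Illustrative DLP-style check: flag text that looks like it contains
# credentials or keys. The patterns are deliberately simplified examples.
import re

SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),        # AWS key ID shape
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def find_sensitive(text: str) -> list[str]:
    """Return the names of all patterns that match somewhere in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
```

Commercial DLP tools combine far richer detection (classifiers, fingerprinting, exact-match dictionaries) with the interactive user coaching Canzanese describes, but the basic flow is the same: inspect the content before it leaves the organization.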
