ChatGPT Sparking Security Concerns


Cyberhaven, a data security company, recently released a report stating that it detected and blocked attempts to paste data into ChatGPT by 4.2% of the 1.6 million employees at its client companies, roughly 67,000 people. The blocked submissions risked leaking sensitive information to the LLM, including client data, source code, and regulated information.

ChatGPT's appeal has skyrocketed. It reached 100 million active users within two months of release, making it the fastest-growing consumer application ever. Users are drawn to the tool's sophisticated capabilities, but they are also concerned about its potential to upend numerous industries. OpenAI, the firm that created ChatGPT, trained it on 300 billion words drawn from books, articles, blogs, and posts on the Internet, a corpus that critics allege included personally identifiable information collected without consent.

Following Microsoft's reported $10 billion investment in OpenAI, ChatGPT's parent company, in January, ChatGPT is expected to be rolled out across Microsoft products, including Word, PowerPoint, and Outlook.

Employees are feeding sensitive corporate data and privacy-protected information to large language models (LLMs) such as ChatGPT. The concern is that, if the service does not implement adequate data security, this data may be incorporated into the AI models and retrieved later by anyone who asks the right questions.
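One common mitigation, and roughly what data-security tools of this kind do at a far more sophisticated level, is to scan outgoing prompts for sensitive patterns before they ever reach the model. Below is a minimal sketch in Python; the regex patterns and the send_to_llm stub are hypothetical placeholders, not any vendor's actual implementation.

```python
import re

# Hypothetical patterns for sensitive data; a real DLP tool would use
# far more robust detection (classifiers, data fingerprinting, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_llm(prompt: str) -> str:
    # Placeholder: a real client would POST to the provider's API here.
    return "(model response)"

def guarded_send(prompt: str) -> str:
    """Block the request entirely if the prompt contains sensitive data."""
    violations = check_prompt(prompt)
    if violations:
        return f"Blocked: prompt contains sensitive data ({', '.join(violations)})"
    return send_to_llm(prompt)

if __name__ == "__main__":
    print(guarded_send("Summarize this memo for me."))          # passes through
    print(guarded_send("My SSN is 123-45-6789, draft a letter."))  # blocked
```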

With the growing adoption of OpenAI's ChatGPT and its underlying model, the Generative Pre-trained Transformer (GPT-3), as well as other LLMs, businesses and security experts have begun to worry that sensitive data ingested into the models as training data could reemerge when prompted by the appropriate queries. Some are already acting: JPMorgan, for instance, restricted employees' access to ChatGPT, while Amazon, Microsoft, and Walmart have cautioned staff to use generative AI services carefully.

Concerns are not limited to GPT-based services. Otter.ai, an automated transcription service, converts audio files into text while automatically identifying speakers, tagging crucial words and phrases, and underlining key phrases; journalists have raised concerns about the company storing that content in its cloud.

Howard Ting, Cyberhaven's CEO, predicts that adoption of generative AI apps will continue to grow and that they will be used for a variety of tasks, including drafting memos and presentations, identifying security incidents, and interacting with patients. His predictions are based on conversations with his company's clients.

Because a small number of individuals make the majority of the dangerous requests, education could have a significant impact on whether data leaks from a particular organization. According to Ting, fewer than 1% of employees are responsible for 80% of the incidents of sending critical data to ChatGPT.
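Statistics like that fall out of a simple Pareto analysis of egress logs. The sketch below, which assumes a hypothetical per-user incident log rather than any real dataset, computes the smallest share of users that accounts for 80% of incidents.

```python
from collections import Counter

# Hypothetical incident log: one entry per risky submission, keyed by user.
incidents = ["alice", "alice", "bob", "alice", "carol", "alice", "bob", "alice"]

counts = Counter(incidents)   # incidents per user
total = sum(counts.values())
running = 0
top_users = 0

# Walk users from most to least active until 80% of incidents are covered.
for user, n in counts.most_common():
    running += n
    top_users += 1
    if running / total >= 0.80:
        break

share_of_users = top_users / len(counts)
print(f"{share_of_users:.0%} of users account for "
      f"{running / total:.0%} of incidents")
```

Run against an organization's real logs, a highly skewed result like Cyberhaven's suggests that targeted training for a handful of users could eliminate most of the risk.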

OpenAI and other vendors are also restricting the LLM's access to sensitive data and personal information: when ChatGPT is now asked for personal information or sensitive corporate data, it declines with a canned response.

