
New ChatGPT Update Unveils Alarming Security Vulnerabilities – Is Your Data at Risk?

Recent enhancements to ChatGPT, such as the introduction of the Code Interpreter, have raised new security concerns, according to research by security expert Johann Rehberger that was subsequently validated by Tom's Hardware. The vulnerabilities stem from ChatGPT's newly added file-upload feature, a component of the recent ChatGPT Plus update.

Among the additions to ChatGPT Plus, the Code Interpreter stands out: it can execute Python code and analyze uploaded files, alongside other new features such as DALL-E image generation. These updates, however, have inadvertently exposed security flaws. The Code Interpreter runs inside a sandboxed environment that nevertheless proves susceptible to prompt-injection attacks.

The identified vulnerability is a long-standing one: an attacker can trick ChatGPT into executing instructions fetched from an external URL. The injected instructions direct ChatGPT to encode the contents of uploaded files into URL-friendly strings and send that data to a potentially malicious website.
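To make the exfiltration step concrete, the sketch below shows roughly what an injected prompt would have the sandbox do: read an uploaded file, percent-encode its contents, and append them to an attacker-controlled URL. The function name and the `attacker.example` domain are illustrative placeholders, not details from the report.

```python
import urllib.parse

def build_exfil_url(file_path: str, base_url: str) -> str:
    """Sketch of the exfiltration described above: encode a file's
    contents into a URL-friendly string appended to an external URL."""
    with open(file_path) as f:
        contents = f.read()
    # Percent-encode everything so the data survives as a query string.
    encoded = urllib.parse.quote(contents, safe="")
    return f"{base_url}?data={encoded}"
```

A request to the resulting URL would hand the file's contents to whoever controls the server, which is why executing instructions from untrusted web pages is dangerous.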

While the attack succeeds only under specific conditions, such as the user actively pasting a malicious URL into ChatGPT, the potential risks are worrisome. The threat could materialize through a trusted website being compromised to host a malicious prompt, or through social engineering that convinces the user to paste the URL themselves.

Tom's Hardware conducted testing to gauge how vulnerable users are to this attack. The test involved creating a fabricated environment-variables file, then having ChatGPT process it and, unknowingly, transmit the data to an external server.
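A decoy file of the kind described might look like the sketch below. Every value is an invented placeholder, not a real credential and not the exact content Tom's Hardware used.

```python
# Sketch of a fabricated environment-variables file for such a test.
# All values are made-up placeholders, safe to leak on purpose.
FAKE_ENV = """\
DB_PASSWORD=not_a_real_password
AWS_ACCESS_KEY_ID=AKIAFAKEPLACEHOLDER
AWS_SECRET_ACCESS_KEY=fake/secret/for/testing
"""

def write_decoy_env(path: str) -> None:
    """Write the decoy file that would then be uploaded to ChatGPT."""
    with open(path, "w") as f:
        f.write(FAKE_ENV)
```

If the injected prompt works, the contents of this file end up in a request to the attacker's server, proving the data-exfiltration path without exposing anything real.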

The effectiveness of the exploit varied across sessions, but the overall findings raise serious security concerns. Particularly troubling is that ChatGPT can read and execute Linux commands, as well as handle user-uploaded files, within its Linux-based virtual environment.

However unlikely this loophole may seem in practice, its existence is noteworthy. Ideally, ChatGPT should refuse to execute instructions embedded in external web pages, but the discovered vulnerability shows that it does not. Mashable sought a response from OpenAI on these findings, but none had been received as of the report.