Search This Blog

Powered by Blogger.

Blog Archive

Labels

Researchers Develop AI "Worms" Capable of Inter-System Spread, Enabling Data Theft Along the Way

The worm successfully pilfered sensitive information such as social security numbers and credit card details.

 

A team of researchers has developed a self-replicating computer worm designed to target AI-powered applications like Gemini Pro, ChatGPT 4.0, and LLaVA. The aim of this project was to showcase the vulnerabilities in AI-enabled systems, particularly how interconnections between generative-AI platforms can facilitate the spread of malware.

The researchers, consisting of Stav Cohen from the Israel Institute of Technology, Ben Nassi from Cornell Tech, and Ron Bitton from Intuit, dubbed their creation 'Morris II', drawing inspiration from the infamous 1988 internet worm.

Their worm was designed with three main objectives. Firstly, it was engineered to replicate itself using adversarial self-replicating prompts, which exploit the AI applications' tendency to output the original prompt, thereby perpetuating the worm. 

Secondly, it aimed to carry out various malicious activities, ranging from data theft to the creation of inflammatory emails for propagandistic purposes. Lastly, it needed the capability to traverse hosts and AI applications to proliferate within the AI ecosystem.

The worm utilizes two primary methods for propagation. The first method targets AI-assisted email applications employing retrieval-augmented generation (RAG), where a poisoned email triggers the generation of a reply containing the worm, subsequently spreading it to other hosts. The second method involves inputs to generative-AI models, prompting them to create outputs that further disseminate the worm to new hosts.

During testing, the worm successfully pilfered sensitive information such as social security numbers and credit card details.

To raise awareness about the potential risks posed by such worms, the researchers shared their findings with Google and OpenAI. While Google declined to comment, an OpenAI spokesperson acknowledged the potential exploitability of prompt-injection vulnerabilities resulting from unchecked or unfiltered user inputs.

Instances like these underscore the imperative for increased research, testing, and regulation in the deployment of generative-AI applications.
Share it:

AI Worms

Cyber Security

Data Safety

Data Theft

Private Data

Researchers

Security

system spread