Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label AI Worms. Show all posts

Researchers Develop AI "Worms" Capable of Inter-System Spread, Enabling Data Theft Along the Way

 

A team of researchers has developed a self-replicating computer worm designed to target AI-powered applications like Gemini Pro, ChatGPT 4.0, and LLaVA. The aim of this project was to showcase the vulnerabilities in AI-enabled systems, particularly how interconnections between generative-AI platforms can facilitate the spread of malware.

The researchers, consisting of Stav Cohen from the Israel Institute of Technology, Ben Nassi from Cornell Tech, and Ron Bitton from Intuit, dubbed their creation 'Morris II', drawing inspiration from the infamous 1988 internet worm.

Their worm was designed with three main objectives. Firstly, it was engineered to replicate itself using adversarial self-replicating prompts, which exploit the AI applications' tendency to output the original prompt, thereby perpetuating the worm. 

Secondly, it aimed to carry out various malicious activities, ranging from data theft to the creation of inflammatory emails for propagandistic purposes. Lastly, it needed the capability to traverse hosts and AI applications to proliferate within the AI ecosystem.

The worm utilizes two primary methods for propagation. The first method targets AI-assisted email applications employing retrieval-augmented generation (RAG), where a poisoned email triggers the generation of a reply containing the worm, subsequently spreading it to other hosts. The second method involves inputs to generative-AI models, prompting them to create outputs that further disseminate the worm to new hosts.

During testing, the worm successfully pilfered sensitive information such as social security numbers and credit card details.

To raise awareness about the potential risks posed by such worms, the researchers shared their findings with Google and OpenAI. While Google declined to comment, an OpenAI spokesperson acknowledged the potential exploitability of prompt-injection vulnerabilities resulting from unchecked or unfiltered user inputs.

Instances like these underscore the imperative for increased research, testing, and regulation in the deployment of generative-AI applications.

Generative AI Worms: Threat of the Future?

Generative AI worms

The generative AI systems of the present are becoming more advanced due to the rise in their use, such as Google's Gemini and OpenAI's ChatGPT. Tech firms and startups are making AI bits and ecosystems that can do mundane tasks on your behalf, think about blocking a calendar or product shopping. But giving more freedom to these things tools comes at the cost of risking security. 

Generative AI worms: Threat in the future

In the latest study, researchers have made the first "generative AI worms" that can spread from one device to another, deploying malware or stealing data in the process.  

Nassi, in collaboration with fellow academics Stav Cohen and Ron Bitton, developed the worm, which they named Morris II in homage to the 1988 internet debacle caused by the first Morris computer worm. The researchers demonstrate how the AI worm may attack a generative AI email helper to steal email data and send spam messages, circumventing several security measures in ChatGPT and Gemini in the process, in a research paper and website.

Generative AI worms in the lab

The study, conducted in test environments rather than on a publicly accessible email assistant, coincides with the growing multimodal nature of large language models (LLMs), which can produce images and videos in addition to text.

Prompts are language instructions that direct the tools to answer a question or produce an image. This is how most generative AI systems operate. These prompts, nevertheless, can also be used as a weapon against the system. 

Prompt injection attacks can provide a chatbot with secret instructions, while jailbreaks can cause a system to ignore its security measures and spew offensive or harmful content. For instance, a hacker might conceal text on a website instructing an LLM to pose as a con artist and request your bank account information.

The researchers used a so-called "adversarial self-replicating prompt" to develop the generative AI worm. According to the researchers, this prompt causes the generative AI model to output a different prompt in response. 

The email system to spread worms

The researchers connected ChatGPT, Gemini, and open-source LLM, LLaVA, to develop an email system that could send and receive messages using generative AI to demonstrate how the worm may function. They then discovered two ways to make use of the system: one was to use a self-replicating prompt that was text-based, and the other was to embed the question within an image file.

A video showcasing the findings shows the email system repeatedly forwarding a message. Also, according to the experts, data extraction from emails is possible. According to Nassi, "It can be names, phone numbers, credit card numbers, SSNs, or anything else that is deemed confidential."

Generative AI worms to be a major threat soon

Nassi and the other researchers report that they expect to see generative AI worms in the wild within the next two to three years in a publication that summarizes their findings. According to the research paper, "many companies in the industry are massively developing GenAI ecosystems that integrate GenAI capabilities into their cars, smartphones, and operating systems."