Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label AI prompt injection attack. Show all posts

Hackers Use DNS Records to Hide Malware and AI Prompt Injections

 

Cybercriminals are increasingly leveraging an unexpected and largely unmonitored part of the internet’s infrastructure—the Domain Name System (DNS)—to hide malicious code and exploit security weaknesses. Security researchers at DomainTools have uncovered a campaign in which attackers embedded malware directly into DNS records, a method that helps them avoid traditional detection systems. 

DNS records are typically used to translate website names into IP addresses, allowing users to access websites without memorizing numerical codes. However, they can also include TXT records, which are designed to hold arbitrary text. These records are often used for legitimate purposes, such as domain verification for services like Google Workspace. Unfortunately, they can also be misused to store and distribute malicious scripts. 

In a recent case, attackers converted a binary file of the Joke Screenmate malware into hexadecimal code and split it into hundreds of fragments. These fragments were stored across multiple subdomains of a single domain, with each piece placed inside a TXT record. Once an attacker gains access to a system, they can quietly retrieve these fragments through DNS queries, reconstruct the binary code, and deploy the malware. Since DNS traffic often escapes close scrutiny—especially when encrypted via DNS over HTTPS (DOH) or DNS over TLS (DOT)—this method is particularly stealthy. 

Ian Campbell, a senior security engineer at DomainTools, noted that even companies with their own internal DNS resolvers often struggle to distinguish between normal and suspicious DNS requests. The rise of encrypted DNS traffic only makes it harder to detect such activity, as the actual content of DNS queries remains hidden from most monitoring tools. This isn’t a new tactic. Security researchers have observed similar methods in the past, including the use of DNS records to host PowerShell scripts. 

However, the specific use of hexadecimal-encoded binaries in TXT records, as described in DomainTools’ latest findings, adds a new layer of sophistication. Beyond malware, the research also revealed that TXT records are being used to launch prompt injection attacks against AI chatbots. These injections involve embedding deceptive or malicious prompts into files or documents processed by AI models. 

In one instance, TXT records were found to contain commands instructing a chatbot to delete its training data, return nonsensical information, or ignore future instructions entirely. This discovery highlights how the DNS system—an essential but often overlooked component of the internet—can be weaponized in creative and potentially damaging ways. 

As encryption becomes more widespread, organizations need to enhance their DNS monitoring capabilities and adopt more robust defensive strategies to close this blind spot before it’s further exploited.

This Side of AI Might Not Be What You Expected

 


In the midst of our tech-driven era, there's a new concern looming — AI prompt injection attacks. 

Artificial intelligence, with its transformative capabilities, has become an integral part of our digital interactions. However, the rise of AI prompt injection attacks introduces a new dimension of risk, posing challenges to the trust we place in these advanced systems. This article seeks to demystify the threat, shedding light on the mechanisms that underlie these attacks and empowering individuals to operate the AI with a heightened awareness.

But what exactly are they, how do they work, and most importantly, how can you protect yourself?

What is an AI Prompt Injection Attack?

Picture AI as your intelligent assistant and prompt injection attacks as a clever ploy to make it go astray. These attacks exploit vulnerabilities in AI systems, allowing individuals with malicious intent to sneak in instructions the AI wasn't programmed to handle. In simpler terms, it's like manipulating the AI into saying or doing things it shouldn't. From minor inconveniences to major threats like coaxing people into revealing sensitive information, the implications are profound.

The Mechanics Behind Prompt Injection Attacks

1. DAN Attacks (Do Anything Now):

Think of this as the AI version of "jailbreaking." While it doesn't directly harm users, it expands the AI's capabilities, potentially transforming it into a tool for mischief. For instance, a savvy researcher demonstrated how an AI could be coerced into generating harmful code, highlighting the risks involved.

2. Training Data Poisoning Attacks: 

These attacks manipulate an AI's training data, altering its behaviour. Picture hackers deceiving an AI designed to catch phishing messages, making it believe certain scams are acceptable. This compromises the AI's ability to effectively safeguard users.

3. Indirect Prompt Injection Attacks:

Among the most concerning for users, these attacks involve feeding malicious instructions to the AI before users receive their responses. This could lead to the AI persuading users into harmful actions, such as signing up for a fraudulent website.

Assessing the Threat Level

Yes, AI prompt injection attacks are a legitimate concern, even though no successful attacks have been reported outside of controlled experiments. Regulatory bodies, including the Federal Trade Commission, are actively investigating, underscoring the importance of vigilance in the ever-evolving landscape of AI.

How To Protect Yourself?

Exercise caution with AI-generated information. Scrutinise the responses, recognizing that AI lacks human judgement. Stay vigilant and responsibly enjoy the benefits of AI. Understand that questioning and comprehending AI outputs are essential to navigating this dynamic technological landscape securely.

In essence, while AI prompt injection attacks may seem intricate, breaking down the elements emphasises the need for a mindful and informed approach.