Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label GenAI Security. Show all posts

New Reprompt URL Attack Exposed and Patched in Microsoft Copilot

 

Security researchers at Varonis have uncovered a new prompt-injection technique targeting Microsoft Copilot, highlighting how a single click could be enough to compromise sensitive user data. The attack method, named Reprompt, abuses the way Copilot and similar generative AI assistants process certain URL parameters, effectively turning a normal-looking link into a vehicle for hidden instructions. While Microsoft has since patched the flaw, the finding underscores how quickly attackers are adapting AI-specific exploitation methods.

Prompt injection attacks work by slipping hidden instructions into content that an AI model is asked to read, such as emails or web pages. Because large language models still struggle to reliably distinguish between data to analyze and commands to execute, they can be tricked into following these embedded prompts. In traditional cases, this might mean white text on a white background or minuscule fonts inside an email that the user then asks the AI to summarize, unknowingly triggering the malicious instructions.

Reprompt takes this concept a step further by moving the injection into the URL itself, specifically into a query parameter labeled “q.” Varonis demonstrated that by appending a long string of detailed instructions to an otherwise legitimate Copilot link, such as “http://copilot.microsoft.com/?q=Hello”, an attacker could cause Copilot to treat that parameter as if the user had typed it directly into the chat box. In testing, this allowed the researchers to exfiltrate sensitive data that the victim had previously shared with the AI, all triggered by a single click on a crafted link.

This behaviour is especially dangerous because many LLM-based tools interpret the q parameter as natural-language input, effectively blurring the line between navigation and instruction. A user might believe they are simply opening Copilot, but in reality they are launching a session already preloaded with hidden commands created by an attacker. Once executed, these instructions could request summaries of confidential conversations, collect personal details, or send data to external endpoints, depending on how tightly the AI is integrated with corporate systems.

After Varonis disclosed the issue, Microsoft moved to close the loophole and block prompt-injection attempts delivered via URLs. According to the researchers, prompt injection through q parameters in Copilot is no longer exploitable in the same way, reducing the immediate risk for end users. Even so, Reprompt serves as a warning that AI interfaces—especially those embedded into browsers, email clients, and productivity suites—must be treated as sensitive attack surfaces, demanding continuous testing and robust safeguards against new injection techniques.