Google recently fixed a critical vulnerability in its Gemini AI assistant, which is tightly integrated with Android, Google Workspace, Gmail, Calendar, and Google Home. The flaw allowed attackers to exploit Gemini via creatively crafted Google Calendar invites, using indirect prompt injection techniques hidden in event titles.
Once the malicious invite was sent, any user interaction with Gemini—such as asking for their daily calendar or emails—could trigger unintended actions, including the extraction of sensitive data, the control of smart home devices, tracking of user locations, launching of applications, or even joining Zoom video calls.
The vulnerability exploited Gemini’s wide-reaching permissions and its context window. The attack did not require acceptance of the calendar invite, as Gemini’s natural behavior is to pull all event details when queried. The hostile prompt, embedded in the event title, would be processed by Gemini as part of the conversation, bypassing its prompt filtering and other security mechanisms.
The researchers behind the attack, SafeBreach, demonstrated that just acting like a normal Gemini user could unknowingly expose confidential information or give attackers command over connected devices. In particular, attackers could stealthily place the malicious prompt in the sixth invite out of several, as Google Calendar only displays the five most recent events unless manually expanded, further complicating detection by users.
The case raises deep concerns about the inherent risks of AI assistants linked to rich context sources like email and calendars, where hostile prompts can easily evade standard model protections and inject instructions not visible to the user. This type of attack, called an indirect prompt injection, was previously flagged by Mozilla’s Marco Figueroa in other Gemini-related exploits. Such vulnerabilities pave the way for both data leaks and convincing phishing attacks.
Google responded proactively, patching the flaw before public exploitation, crediting the research team for responsible disclosure and collaboration. The incident has accelerated Google’s deployment of advanced defenses, including improved adversarial awareness and mitigations against hijack attempts.
Security experts stress that continued red-teaming, industry cooperation, and rethinking automation boundaries are now imperative as AI becomes more enmeshed in smart devices and agents with broad powers. Gemini’s incident stands as a wake-up call for the real-world risks of prompt injection and automation in next-generation AI assistants, emphasizing the need for robust, ongoing security measures.