Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Gemini Flaw. Show all posts

Google Gemini Calendar Flaw Allows Meeting Invites to Leak Private Data

 

Though built to make life easier, artificial intelligence helpers sometimes carry hidden risks. A recent study reveals that everyday features - such as scheduling meetings - can become pathways for privacy breaches. Instead of protecting data, certain functions may unknowingly expose it. Experts from Miggo Security identified a flaw in Google Gemini’s connection to Google Calendar. Their findings show how an ordinary invite might secretly gather private details. What looks innocent on the surface could serve another purpose beneath. 

A fresh look at Gemini shows it helps people by understanding everyday speech and pulling details from tools like calendars. Because the system responds to words instead of rigid programming rules, security experts from Miggo discovered a gap in its design. Using just text that seems normal, hackers might steer the AI off course. These insights, delivered openly to Hackread.com, reveal subtle risks hidden in seemingly harmless interactions. 

A single calendar entry is enough to trigger the exploit - no clicking, no downloads, no obvious red flags. Hidden inside what looks like normal event details sits coded directions meant for machines, not people. Rather than arriving through email attachments or shady websites, the payload comes disguised as routine scheduling data. The wording blends in visually, yet when processed by Gemini, it shifts into operational mode. Instructions buried in plain sight tell the system to act without signaling intent to the recipient. 

A single harmful invitation sits quietly once added to the calendar. Only after the user poses a routine inquiry - like asking about free time on Saturday - is anything set in motion. When Gemini checks the agenda, it reads the tainted event along with everything else. Within that entry lies a concealed instruction: gather sensitive calendar data and compile a report. Using built-in features of Google Calendar, the system generates a fresh event containing those extracted details. 

Without any sign, personal timing information ends up embedded within a new appointment. What makes the threat hard to spot is its invisible nature. Though responses appear normal, hidden processes run without alerting the person using the system. Instead of bugs in software, experts point to how artificial intelligence understands words as the real weak point. The concern grows as behavior - rather than broken code - becomes the source of danger. Not seeing anything wrong does not mean everything is fine. 

Back in December 2025, problems weren’t new for Google’s AI tools when it came to handling sneaky language tricks. A team at Noma Security found a gap called GeminiJack around that time. Hidden directions inside files and messages could trigger leaks of company secrets through the system. Experts pointed out flaws deep within how these smart tools interpret context across linked platforms. The design itself seemed to play a role in the vulnerability. Following the discovery by Miggo Security, Google fixed the reported flaw. 

Still, specialists note similar dangers remain possible. Most current protection systems look for suspicious code or URLs - rarely do they catch damaging word patterns hidden within regular messages. When AI helpers get built into daily software and given freedom to respond independently, some fear misuse may grow. Unexpected uses of helpful features could lead to serious consequences, researchers say.

Gemini Flaw Exposed Via Malicious Google Calendar Invites, Researchers Find

 

Google recently fixed a critical vulnerability in its Gemini AI assistant, which is tightly integrated with Android, Google Workspace, Gmail, Calendar, and Google Home. The flaw allowed attackers to exploit Gemini via creatively crafted Google Calendar invites, using indirect prompt injection techniques hidden in event titles. 

Once the malicious invite was sent, any user interaction with Gemini—such as asking for their daily calendar or emails—could trigger unintended actions, including the extraction of sensitive data, the control of smart home devices, tracking of user locations, launching of applications, or even joining Zoom video calls. 

The vulnerability exploited Gemini’s wide-reaching permissions and its context window. The attack did not require acceptance of the calendar invite, as Gemini’s natural behavior is to pull all event details when queried. The hostile prompt, embedded in the event title, would be processed by Gemini as part of the conversation, bypassing its prompt filtering and other security mechanisms. 

The researchers behind the attack, SafeBreach, demonstrated that just acting like a normal Gemini user could unknowingly expose confidential information or give attackers command over connected devices. In particular, attackers could stealthily place the malicious prompt in the sixth invite out of several, as Google Calendar only displays the five most recent events unless manually expanded, further complicating detection by users. 

The case raises deep concerns about the inherent risks of AI assistants linked to rich context sources like email and calendars, where hostile prompts can easily evade standard model protections and inject instructions not visible to the user. This type of attack, called an indirect prompt injection, was previously flagged by Mozilla’s Marco Figueroa in other Gemini-related exploits. Such vulnerabilities pave the way for both data leaks and convincing phishing attacks. 

Google responded proactively, patching the flaw before public exploitation, crediting the research team for responsible disclosure and collaboration. The incident has accelerated Google’s deployment of advanced defenses, including improved adversarial awareness and mitigations against hijack attempts. 

Security experts stress that continued red-teaming, industry cooperation, and rethinking automation boundaries are now imperative as AI becomes more enmeshed in smart devices and agents with broad powers. Gemini’s incident stands as a wake-up call for the real-world risks of prompt injection and automation in next-generation AI assistants, emphasizing the need for robust, ongoing security measures.