
Publicly Exposed Google Cloud API Keys Gain Unintended Access to Gemini Services

A recent security analysis has revealed that thousands of Google Cloud API keys available on the public internet could be misused to interact with Google’s Gemini artificial intelligence platform, creating both data exposure and financial risks.

Google Cloud API keys, often recognizable by the prefix “AIza,” are typically used to connect websites and applications to Google services and to track usage for billing. They are not meant to function as high-level authentication credentials. However, researchers from Truffle Security discovered that these keys can be leveraged to access Gemini-related endpoints once the Generative Language API is enabled within a Google Cloud project.

During their investigation, the firm identified nearly 3,000 active API keys embedded directly in publicly accessible client-side code, including JavaScript used to power website features such as maps and other Google integrations. According to security researcher Joe Leon, possession of a valid key may allow an attacker to retrieve stored files, read cached content, and generate large volumes of AI-driven requests that would be billed to the project owner. He further noted that these keys can now authenticate to Gemini services, even though they were not originally designed for that purpose.
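Keys with the "AIza" prefix follow a recognizable shape, which is what makes bulk scanning of client-side code feasible. A minimal sketch of such a scan, using the commonly cited pattern for these keys (the literal prefix "AIza" followed by 35 URL-safe characters; this is the format observed in the wild, not an official specification):

```python
import re

# Commonly cited shape of a Google Cloud API key: "AIza" plus 35
# URL-safe characters, 39 characters in total.
AIZA_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(source: str) -> list[str]:
    """Return unique AIza-style strings found in client-side source code."""
    return sorted(set(AIZA_KEY_RE.findall(source)))

# Scanning a snippet of bundled JavaScript with a dummy key embedded.
sample_js = 'fetch("/maps?key=AIzaSyDUMMYDUMMYDUMMYDUMMYDUMMYDUMMYDUM")'
hits = find_candidate_keys(sample_js)
```

This is essentially what attackers (and scanners like the one Truffle Security used) automate across crawled JavaScript bundles and public repositories.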

The root of the problem lies in how permissions are applied when the Gemini API is activated. If a project owner enables the Generative Language API, all existing API keys tied to that project may automatically inherit access to Gemini endpoints. This includes keys that were previously embedded in publicly visible website code. Critically, there is no automatic alert notifying users that older keys have gained expanded capabilities.
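Project owners can check whether one of their own keys has silently gained this access by probing the Generative Language API's model-listing endpoint. A minimal sketch, assuming the public v1beta REST surface and a simplified interpretation of the status codes (the exact error semantics may vary):

```python
from urllib.parse import urlencode

# Public REST base of the Generative Language API (the Gemini backend).
# Listing models is a cheap call that succeeds only when the key is valid
# AND the API is enabled in the key's project.
BASE = "https://generativelanguage.googleapis.com/v1beta/models"

def gemini_probe_url(api_key: str) -> str:
    """Build the audit request URL for a given AIza key."""
    return f"{BASE}?{urlencode({'key': api_key})}"

def classify_probe(status: int) -> str:
    """Interpret the HTTP status of the probe (assumed mapping)."""
    if status == 200:
        return "key can reach Gemini endpoints"
    if status == 403:
        return "API disabled or key restricted"
    if status in (400, 401):
        return "key invalid or malformed"
    return "inconclusive"
```

Fetching the URL (e.g. with `urllib.request.urlopen`) should only ever be done against keys in projects you own.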

As a result, attackers who routinely scan websites for exposed credentials could capture these keys and use them to access endpoints such as file storage or cached content interfaces. They could also submit repeated Gemini API requests, potentially generating substantial usage charges for victims through quota abuse.

The researchers also observed that when developers create a new API key within Google Cloud, the default configuration is set to “Unrestricted.” This means the key can interact with every enabled API within the same project, including Gemini, unless specific limitations are manually applied. In total, Truffle Security reported identifying 2,863 active keys accessible online, including one associated with a Google-related website.
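The "Unrestricted" default can be caught in an inventory pass over key metadata. A hedged sketch assuming the general shape of the Cloud API Keys v2 REST resource (a `restrictions.apiTargets` list naming permitted services); the field names here are illustrative of that resource, not guaranteed:

```python
def find_unrestricted_keys(keys: list[dict]) -> list[str]:
    """Return display names of keys with no API restrictions.

    A key without an apiTargets list can call every API enabled in its
    project, including Gemini once the Generative Language API is on --
    the "Unrestricted" default described above.
    """
    flagged = []
    for key in keys:
        targets = key.get("restrictions", {}).get("apiTargets", [])
        if not targets:
            flagged.append(key.get("displayName", "<unnamed>"))
    return flagged

# Illustrative inventory: one properly scoped key, one legacy default.
inventory = [
    {"displayName": "maps-frontend",
     "restrictions": {"apiTargets": [{"service": "maps-backend.googleapis.com"}]}},
    {"displayName": "legacy-web-key"},  # no restrictions: inherits Gemini access
]
```

Any key this pass flags is a candidate for adding explicit API restrictions or for rotation.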

Separately, Quokka published findings from a large-scale scan of 250,000 Android applications, uncovering more than 35,000 unique Google API keys embedded in mobile software. The company warned that beyond financial abuse through automated AI requests, organizations must consider broader implications. AI-enabled endpoints can interact with prompts, generated outputs, and integrated cloud services in ways that amplify the consequences of a compromised key.

Even in cases where direct customer records are not exposed, the combination of AI inference access, consumption of service quotas, and potential connectivity to other Google Cloud resources creates a substantially different risk profile than developers may have anticipated when treating API keys as simple billing identifiers.

Although the behavior was initially described as functioning as designed, Google later confirmed it had collaborated with researchers to mitigate the issue. A company spokesperson stated that measures have been implemented to detect and block leaked API keys attempting to access Gemini services. There is currently no confirmed evidence that the weakness has been exploited at scale. However, a recent online post described an incident in which a reportedly stolen API key generated over $82,000 in charges within a two-day period, compared to the account’s typical monthly expenditure of approximately $180.

The situation remains under review, and further updates are expected if additional details surface.

Security experts recommend that Google Cloud users audit their projects to determine whether AI-related APIs are enabled. If such services are active and associated API keys are publicly accessible through website code or open repositories, those keys should be rotated immediately. Researchers advise prioritizing older keys, as they are more likely to have been deployed publicly under earlier guidance suggesting limited risk.

Industry analysts emphasize that API security must be continuous. Changes in how APIs operate or what data they can access may not constitute traditional software vulnerabilities, yet they can materially increase exposure. As artificial intelligence becomes more tightly integrated with cloud services, organizations must move beyond periodic testing and instead monitor behavior, detect anomalies, and actively block suspicious activity to reduce evolving risk.

Researchers Expose AI Prompt Injection Attack Hidden in Images

Researchers have unveiled a new type of cyberattack that can steal sensitive user data by embedding hidden prompts inside images processed by AI platforms. These malicious instructions remain invisible to the human eye but become detectable once the images are downscaled using common resampling techniques before being sent to a large language model (LLM).

The technique, designed by Trail of Bits experts Kikimora Morozova and Suha Sabi Hussain, builds on earlier research from a 2020 USENIX paper by TU Braunschweig, which first proposed the concept of image-scaling attacks in machine learning systems.

Typically, when users upload pictures into AI tools, the images are automatically downscaled to a lower resolution for efficiency and cost optimization. Depending on the resampling method (nearest neighbor, bilinear, or bicubic interpolation), aliasing artifacts can emerge, unintentionally revealing hidden patterns if the source image was crafted with this purpose in mind.
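The aliasing effect can be reproduced with a toy nearest-neighbor downscaler, implemented here as strided slicing (a simplification; production systems use library resamplers): a payload placed only at the pixel positions the downscaler samples is sparse and easy to overlook at full resolution, yet completely dominates the image the model actually sees.

```python
import numpy as np

def downscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Toy nearest-neighbor downscale: keep every `factor`-th pixel."""
    return img[::factor, ::factor]

factor = 4
full = np.full((8 * factor, 8 * factor), 255, dtype=np.uint8)  # white 32x32 image

# Hide a dark payload only at the pixels the downscaler will sample.
payload = np.zeros((8, 8), dtype=np.uint8)  # an all-dark 8x8 "message"
full[::factor, ::factor] = payload

small = downscale_nearest(full, factor)

# Only ~6% of full-resolution pixels are dark, but 100% of the
# downscaled image is: the "message" appears only after resampling.
dark_fraction_full = (full == 0).mean()
dark_fraction_small = (small == 0).mean()
```

A real attack does the same thing against bilinear or bicubic kernels, which is why the crafted image must be tuned to the target platform's specific resampler.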

In one demonstration by Trail of Bits, carefully engineered dark areas within a malicious image shifted colors when processed through bicubic downscaling. This transformation exposed black text that the AI system interpreted as additional user instructions. While everything appeared normal to the end user, the model silently executed these hidden commands, potentially leaking data or performing harmful tasks.

In practice, the team showed how this vulnerability could be exploited in Gemini CLI, where hidden prompts enabled the exfiltration of Google Calendar data to an external email address. With the Zapier MCP server configured with trust=True, tool calls were approved automatically, without any user confirmation.

The researchers emphasized that the success of such attacks depends on tailoring the malicious image to the specific downscaling algorithm used by each AI system. Their testing confirmed the method’s effectiveness against:

  1. Google Gemini CLI
  2. Vertex AI Studio (Gemini backend)
  3. Gemini’s web interface
  4. Gemini API via llm CLI
  5. Google Assistant on Android
  6. Genspark

Given the broad scope of this vulnerability, the team developed Anamorpher, an open-source tool (currently in beta) that can generate attack-ready images aligned with multiple downscaling methods.

To defend against this threat, Trail of Bits recommends that AI platforms enforce image dimension limits, provide a preview of the downscaled output before submission to an LLM, and require explicit user approval for sensitive tool calls—especially if text is detected in images.
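These mitigations can be combined into a simple pre-submission gate. A hypothetical sketch: the dimension limit and the text-detection flag are illustrative placeholders, not values or mechanisms Trail of Bits prescribes (real systems would use an OCR pass and platform-specific limits):

```python
MAX_DIM = 1024  # assumed platform limit, not an official value

def image_upload_allowed(width: int, height: int,
                         contains_text: bool,
                         user_confirmed: bool) -> bool:
    """Reject oversized images; gate text-bearing ones on explicit approval.

    `contains_text` stands in for a real OCR/text-detection pass over the
    downscaled preview; `user_confirmed` is the explicit approval step.
    """
    if width > MAX_DIM or height > MAX_DIM:
        return False  # enforce image dimension limits
    if contains_text and not user_confirmed:
        return False  # text in an image requires explicit user approval
    return True
```

The key design point is that the check runs on what the model will actually receive, i.e. the downscaled image, not the original upload.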

"The strongest defense, however, is to implement secure design patterns and systematic defenses that mitigate impactful prompt injection beyond multi-modal prompt injection," the researchers said, pointing to their earlier paper on robust LLM design strategies.