A recent security analysis has revealed that thousands of Google Cloud API keys available on the public internet could be misused to interact with Google’s Gemini artificial intelligence platform, creating both data exposure and financial risks.
Google Cloud API keys, often recognizable by the prefix “AIza,” are typically used to connect websites and applications to Google services and to track usage for billing. They are not meant to function as high-level authentication credentials. However, researchers from Truffle Security discovered that these keys can be leveraged to access Gemini-related endpoints once the Generative Language API is enabled within a Google Cloud project.
During their investigation, the firm identified nearly 3,000 active API keys embedded directly in publicly accessible client-side code, including JavaScript used to power website features such as maps and other Google integrations. According to security researcher Joe Leon, possession of a valid key may allow an attacker to retrieve stored files, read cached content, and generate large volumes of AI-driven requests that would be billed to the project owner. He further noted that these keys can now authenticate to Gemini services, even though they were not originally designed for that purpose.
The root of the problem lies in how permissions are applied when the Gemini API is activated. If a project owner enables the Generative Language API, all existing API keys tied to that project may automatically inherit access to Gemini endpoints. This includes keys that were previously embedded in publicly visible website code. Critically, there is no automatic alert notifying users that older keys have gained expanded capabilities.
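To illustrate the mechanism, the sketch below shows how a plain project API key plugs into a Generative Language API call. The endpoint shape and request body follow Google's published REST format for `generateContent`; the model name and key value are placeholders, and the request itself is deliberately left commented out.

```python
import json

# Base endpoint for the Generative Language API; the model name is illustrative.
GEMINI_ENDPOINT = ("https://generativelanguage.googleapis.com"
                   "/v1beta/models/{model}:generateContent")

def build_gemini_request(api_key: str, prompt: str, model: str = "gemini-pro"):
    """Build the URL and JSON body for a generateContent call.

    Any key from the same project works once the Generative Language API is
    enabled -- no OAuth flow or user consent is involved, only ?key=.
    """
    url = GEMINI_ENDPOINT.format(model=model) + "?key=" + api_key
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(body).encode("utf-8")

# A scraped "AIza..." key plugs straight in. Actually POSTing this (e.g. with
# urllib.request) would generate usage billed to the key owner's project.
url, body = build_gemini_request("AIza" + "x" * 35, "Hello")
```

The point of the sketch is that nothing in the request distinguishes an intended use from an abusive one: the key alone authorizes the call.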
As a result, attackers who routinely scan websites for exposed credentials could capture these keys and use them to access endpoints such as file storage or cached content interfaces. They could also submit repeated Gemini API requests, potentially generating substantial usage charges for victims through quota abuse.
The researchers also observed that when developers create a new API key within Google Cloud, the default configuration is set to “Unrestricted.” This means the key can interact with every enabled API within the same project, including Gemini, unless specific limitations are manually applied. In total, Truffle Security reported identifying 2,863 active keys accessible online, including one associated with a Google-related website.
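A project owner can check for this default directly. The sketch below assumes key metadata shaped like the output of `gcloud services api-keys list --format=json`, where a restricted key carries a `restrictions.apiTargets` list; field names are from that tooling and should be verified against your own output.

```python
import json

def find_unrestricted(keys_json: str):
    """Flag keys that carry no API restrictions.

    Assumes metadata shaped like `gcloud services api-keys list --format=json`:
    a restricted key has a `restrictions.apiTargets` list naming its services.
    """
    flagged = []
    for key in json.loads(keys_json):
        targets = key.get("restrictions", {}).get("apiTargets", [])
        if not targets:  # "Unrestricted": usable against every enabled API
            flagged.append(key.get("displayName", key.get("uid", "unknown")))
    return flagged

sample = ('[{"displayName": "maps-key"},'
          ' {"displayName": "locked-key", "restrictions":'
          ' {"apiTargets": [{"service": "maps-backend.googleapis.com"}]}}]')
print(find_unrestricted(sample))  # → ['maps-key']
```

A key restricted to `maps-backend.googleapis.com` cannot be replayed against Gemini endpoints even if it leaks; an unrestricted one can.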
Separately, Quokka published findings from a large-scale scan of 250,000 Android applications, uncovering more than 35,000 unique Google API keys embedded in mobile software. The company warned that beyond financial abuse through automated AI requests, organizations must consider broader implications. AI-enabled endpoints can interact with prompts, generated outputs, and integrated cloud services in ways that amplify the consequences of a compromised key.
Even in cases where direct customer records are not exposed, the combination of AI inference access, consumption of service quotas, and potential connectivity to other Google Cloud resources creates a substantially different risk profile than developers may have anticipated when treating API keys as simple billing identifiers.
Although Google initially characterized the behavior as working as designed, the company later confirmed it had collaborated with the researchers to mitigate the issue. A spokesperson stated that measures have been implemented to detect and block leaked API keys attempting to access Gemini services. There is currently no confirmed evidence that the weakness has been exploited at scale. However, a recent online post described an incident in which a reportedly stolen API key generated over $82,000 in charges within a two-day period, against the account's typical monthly expenditure of approximately $180.

The situation remains under review, and further updates are expected if additional details surface.
Security experts recommend that Google Cloud users audit their projects to determine whether AI-related APIs are enabled. If such services are active and associated API keys are publicly accessible through website code or open repositories, those keys should be rotated immediately. Researchers advise prioritizing older keys, as they are more likely to have been deployed publicly under earlier guidance suggesting limited risk.
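Such an audit can start with a simple scan of client-side code and repositories for the well-known key shape: "AIza" followed by 35 URL-safe characters. The file suffixes below are an illustrative guess at likely leak locations, not an exhaustive list.

```python
import re
from pathlib import Path

# Well-known shape of Google API keys: "AIza" plus 35 URL-safe characters.
KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def scan_text(text: str):
    """Return any candidate Google API keys found in a blob of text."""
    return KEY_RE.findall(text)

def scan_tree(root: str, suffixes=(".js", ".html", ".env", ".json")):
    """Walk a checkout and map file -> keys for likely leak locations."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            found = scan_text(path.read_text(errors="ignore"))
            if found:
                hits[str(path)] = found
    return hits

demo = 'fetch("https://maps.googleapis.com/maps/api/js?key=AIza' + "A" * 35 + '")'
print(scan_text(demo))  # → one 39-character candidate key
```

Any hit in public code is a candidate for immediate rotation, with older keys checked first.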
Industry analysts emphasize that API security must be continuous. Changes in how APIs operate or what data they can access may not constitute traditional software vulnerabilities, yet they can materially increase exposure. As artificial intelligence becomes more tightly integrated with cloud services, organizations must move beyond periodic testing and instead monitor behavior, detect anomalies, and actively block suspicious activity to reduce evolving risk.
As organizations build and host their own Large Language Models, they also create a network of supporting services and APIs to keep those systems running. The growing danger does not usually originate from the model’s intelligence itself, but from the technical framework that delivers, connects, and automates it. Every new interface added to support an LLM expands the number of possible entry points into the system. During rapid rollouts, these interfaces are often trusted automatically and reviewed later, if at all.
When these access points are given excessive permissions or rely on long-lasting credentials, they can open doors far wider than intended. A single poorly secured endpoint can provide access to internal systems, service identities, and sensitive data tied to LLM operations. For that reason, managing privileges at the endpoint level is becoming a central security requirement.
In practical terms, an endpoint is any digital doorway that allows a user, application, or service to communicate with a model. This includes APIs that receive prompts and return generated responses, administrative panels used to update or configure models, monitoring dashboards, and integration points that allow the model to interact with databases or external tools. Together, these interfaces determine how deeply the LLM is embedded within the broader technology ecosystem.
A major issue is that many of these interfaces are designed for experimentation or early deployment phases. They prioritize speed and functionality over hardened security controls. Over time, temporary testing configurations remain active, monitoring weakens, and permissions accumulate. In many deployments, the endpoint effectively becomes the security perimeter. Its authentication methods, secret management practices, and assigned privileges ultimately decide how far an intruder could move.
Exposure rarely stems from a single catastrophic mistake. Instead, it develops gradually. Internal APIs may be made publicly reachable to simplify integration and left unprotected. Access tokens or API keys may be embedded in code and never rotated. Teams may assume that internal networks are inherently secure, overlooking the fact that VPN access, misconfigurations, or compromised accounts can bridge that boundary. Cloud settings, including improperly configured gateways or firewall rules, can also unintentionally expose services to the internet.
These risks are amplified in LLM ecosystems because models are typically connected to multiple internal systems. If an attacker compromises one endpoint, they may gain indirect access to databases, automation tools, and cloud resources that already trust the model’s credentials. Unlike traditional APIs with narrow functions, LLM interfaces often support broad, automated workflows. This enables lateral movement at scale.
Threat actors can exploit prompts to extract confidential information the model can access. They may also misuse tool integrations to modify internal resources or trigger privileged operations. Even limited access can be dangerous if attackers manipulate input data in ways that influence the model to perform harmful actions indirectly.
Non-human identities intensify this exposure. Service accounts, machine credentials, and API keys allow models to function continuously without human intervention. For convenience, these identities are often granted broad permissions and rarely audited. If an endpoint tied to such credentials is breached, the attacker inherits trusted system-level access. Problems such as scattered secrets across configuration files, long-lived static credentials, excessive permissions, and a growing number of unmanaged service accounts increase both complexity and risk.
Mitigating these threats requires assuming that some endpoints will eventually be reached. Security strategies should focus on limiting impact. Access should follow strict least-privilege principles for both people and systems. Elevated rights should be granted only temporarily and revoked automatically. Sensitive sessions should be logged and reviewed. Credentials must be rotated regularly, and long-standing static secrets should be eliminated wherever possible.
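The "temporary, automatically revoked" pattern can be sketched in a few lines. This is a toy in-process model for illustration only; production systems would lean on the cloud IAM's own short-lived tokens rather than anything hand-rolled.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ScopedGrant:
    """A least-privilege grant that names its allowed actions up front and
    expires on its own -- no standing access to revoke by hand.

    Illustrative only; real deployments should use the IAM system's
    short-lived credentials, not a hand-rolled class.
    """
    identity: str
    allowed_actions: frozenset
    ttl_seconds: float
    issued_at: float = field(default_factory=time.monotonic)

    def permits(self, action: str) -> bool:
        if time.monotonic() - self.issued_at > self.ttl_seconds:
            return False  # expired: access lapses automatically
        return action in self.allowed_actions

grant = ScopedGrant("llm-gateway", frozenset({"read:vector-db"}), ttl_seconds=300)
print(grant.permits("read:vector-db"))   # True while the grant is fresh
print(grant.permits("write:vector-db"))  # False: never granted
```

An attacker who captures such a grant gets a narrow capability with a countdown attached, rather than a permanent master key.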
Because LLM systems operate autonomously and at scale, traditional access models are no longer sufficient. Strong endpoint privilege governance, continuous verification, and reduced standing access are essential to protecting AI-driven infrastructure from escalating compromise.
A recent cyber incident has brought to light how one weak link in software integrations can expose sensitive business information. Salesloft, a sales automation platform, confirmed that attackers exploited its Drift chat integration with Salesforce to steal tokens that granted access to customer environments.
Between August 8 and August 18, 2025, threat actors obtained OAuth and refresh tokens connected to the Drift–Salesforce integration. These tokens work like digital keys, allowing connected apps to access Salesforce data without repeatedly asking for passwords. Once stolen, the tokens were used to log into Salesforce accounts and extract confidential data.
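The mechanics explain why the theft mattered. A refresh token is redeemed for fresh access tokens at Salesforce's OAuth token endpoint with a simple form POST; the sketch below builds that request. The client ID and token values are placeholders, and the request is only constructed, never sent.

```python
from urllib.parse import urlencode

# Salesforce's standard OAuth 2.0 token endpoint.
TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def refresh_request(client_id: str, refresh_token: str):
    """Build the form body a connected app POSTs to mint a new access token.

    This is why stolen refresh tokens are so valuable: whoever holds one can
    mint fresh access tokens without a password or MFA prompt.
    """
    body = urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,       # placeholder connected-app ID
        "refresh_token": refresh_token,
    })
    return TOKEN_URL, body

url, form = refresh_request("example_connected_app", "stolen-refresh-token")
print(form)
```

Revoking the tokens, as Salesloft did, is the only server-side way to cut off this flow once a refresh token has leaked.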
According to Salesloft, the attackers specifically searched for credentials such as Amazon Web Services (AWS) keys, Snowflake access tokens, and internal passwords. The company said the breach only impacted customers who used the Drift–Salesforce connection, while other integrations were unaffected. As a precaution, all tokens for this integration were revoked, forcing customers to reauthenticate before continuing use.
Google’s Threat Intelligence team, which is monitoring the attackers under the name UNC6395, reported that the group issued queries inside Salesforce to collect sensitive details hidden in support cases. These included login credentials, API keys, and cloud access tokens. Investigators noted that while the attackers tried to cover their tracks by deleting query jobs, the activity still appears in Salesforce logs.
To disguise their operations, the hackers used anonymizing tools like Tor and commercial hosting services. Google also identified user-agent strings and IP addresses linked to the attack, which organizations can use to check their logs for signs of compromise.
Security experts are urging affected administrators to rotate credentials immediately, review Salesforce logs for unusual queries, and search for leaked secrets by scanning for terms such as “AKIA” (used in AWS keys), “Snowflake,” “password,” or “secret.” They also recommend tightening access controls on third-party apps, limiting token permissions, and shortening session times to reduce future risk.
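That search can be automated over exported case text. The indicator patterns below follow the published guidance (the AWS access-key-ID shape plus generic secret keywords); the record format is a hypothetical (id, text) pairing for illustration.

```python
import re

# Indicator patterns from the published guidance: AWS access key IDs plus
# generic secret keywords likely to surface in support-case text.
PATTERNS = {
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "keyword": re.compile(r"(?i)\b(snowflake|password|secret)\b"),
}

def flag_records(records):
    """Yield (record_id, label, match) for any record containing an indicator."""
    for rec_id, text in records:
        for label, pattern in PATTERNS.items():
            for match in pattern.findall(text):
                yield rec_id, label, match

cases = [
    ("case-001", "customer pasted AKIAABCDEFGHIJKLMNOP in the ticket"),
    ("case-002", "routine question, nothing sensitive"),
]
for hit in flag_records(cases):
    print(hit)  # flags only the record carrying an AWS-style key
```

Every flagged record points at a credential that should be treated as compromised and rotated, regardless of whether logs show it was queried.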
While some extortion groups have publicly claimed responsibility for the attack, Google stated there is no clear evidence tying them to this breach. The investigation is still ongoing, and attribution remains uncertain.
This incident underlines the broader risks of SaaS integrations. Connected apps are often given high levels of access to critical business platforms. If those credentials are compromised, attackers can bypass normal login protections and move deeper into company systems. As businesses continue relying on cloud applications, stronger governance of integrations and closer monitoring of token use are becoming essential.
Online security has become paramount in a constantly shifting digital environment. Google and Apple are leading the push for passkeys, an authentication technology poised to transform how we protect our accounts.
Passkeys are a form of authentication built on public-key cryptography, standardized through the FIDO2 and WebAuthn specifications. Unlike traditional passwords, which are vulnerable to hacking and phishing attacks, passkeys verify identity with a unique key pair: the service stores only the public key, while the private key never leaves the user's device, establishing a far more robust bond between the user and the platform.
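The challenge-response at the heart of this scheme can be sketched with a deliberately tiny textbook RSA pair. The numbers below are toy-sized and completely insecure (real passkeys use hardware-backed ECDSA or similar); the sketch only shows the shape of the flow: the server holds the public key, the device proves possession of the private key by signing a fresh challenge.

```python
import hashlib
import secrets

# Toy RSA keypair -- textbook-sized and NOT secure; real passkeys use
# hardware-backed keys. The flow shape is what matters here.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent (server-side)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, stays on the device

def sign(challenge: bytes) -> int:
    """Done by the authenticator; the private key never leaves the device."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Done by the server, using only the stored public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(16)  # fresh per login attempt
assert verify(challenge, sign(challenge))
print("challenge-response verified")
```

Because each login signs a fresh random challenge, a captured response is useless for replay, and there is no shared secret on the server for attackers to steal.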
One of the key advantages of passkeys is that they eliminate the need for users to remember complex passwords or go through the hassle of resetting them. Instead, users can rely on their devices to generate and manage these cryptographic keys. This not only simplifies the login process but also reduces the risk of human error, a common factor in security breaches.
Google and Apple have been at the forefront of this shift, integrating passkey technology into their platforms. Apple, for instance, added passkey support in iOS 16 through its AuthenticationServices framework, making it easier for developers to adopt the method in their apps. This move signals a significant step toward a more secure and user-friendly digital landscape.
Moreover, passkeys can play a pivotal role in thwarting phishing attacks, which remain a prevalent threat in the online realm. Since passkeys are tied to specific devices, even if a user inadvertently falls victim to a phishing scam, the attacker would be unable to gain access without the physical device.
While passkeys offer a promising solution to enhance online security, it's important to acknowledge potential challenges. For instance, the technology may face initial resistance due to a learning curve associated with its implementation. Additionally, ensuring compatibility across various platforms and devices will be crucial to its widespread adoption.
Passkeys are a major advancement in digital authentication. By harnessing public-key cryptography, Google and Apple are pushing toward a more secure and frictionless internet experience. As the technology matures, users can look forward to a future in which the laborious practice of managing passwords is a thing of the past. Adopting passkeys is a step toward stronger security and a more user-focused digital environment.
The future of Zero Trust security relies heavily on next-generation firewalls (NGFWs). Gartner Research defines NGFWs as deep packet inspection firewalls that move beyond port and protocol inspection and blocking to add application-level inspection, intrusion prevention, and intelligence brought in from outside the firewall. Gartner also cautions that an NGFW should not be confused with a standalone network intrusion prevention system (IPS), or with an ordinary firewall and an uncoordinated IPS housed in the same appliance.