
Infostealer Malware Targets OpenClaw AI Agent Files to Steal API Keys and Authentication Tokens


OpenClaw, a local AI assistant that runs directly on personal devices, has rapidly gained popularity and is now appearing in threat reports. Because it operates on users’ machines, attackers are shifting their focus to its configuration files. Recent malware infections have been caught stealing setup data containing API keys, login tokens, and other sensitive credentials, exposing private access points that were meant to remain local.

Previously known as ClawdBot or MoltBot, OpenClaw functions as a persistent assistant that reads local files, logs into email and messaging apps, and interacts with web services. Since it stores memory and configuration details on the device itself, compromising it can expose deeply personal and professional data. As adoption grows across home and workplace environments, saved credentials are becoming attractive targets. 

Cybersecurity firm Hudson Rock identified what it believes is the first confirmed case of infostealer malware extracting OpenClaw configuration data. The incident marks a shift in tactics: instead of stealing only browser passwords, attackers are now targeting AI assistant environments that store powerful authentication tokens. According to co-founder and CTO Alon Gal, the infection likely involved a Vidar infostealer variant, with stolen data traced to February 13, 2026. 

Researchers say the malware did not specifically target OpenClaw. Instead, it scanned infected systems broadly for files containing keywords like “token” or “private key.” Because OpenClaw stores data in a hidden folder with those identifiers, its files were automatically captured. Among the compromised files, openclaw.json contained a masked email, workspace path, and a high-entropy gateway authentication token that could enable unauthorized access or API impersonation. 
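The broad, keyword-driven sweep the researchers describe can be sketched in a few lines. This is an illustrative reconstruction, not Vidar's actual logic; the keyword list, file names, and the hidden `.openclaw` directory layout are assumptions based on the report:

```python
# Illustrative sketch of keyword-based file harvesting as described in the
# report -- NOT the actual malware code. Keywords and paths are assumptions.
from pathlib import Path

KEYWORDS = ("token", "private key", "api_key")

def looks_sensitive(path: Path) -> bool:
    """Flag a file whose name or readable contents mention a keyword."""
    name = path.name.lower()
    if any(k.replace(" ", "") in name for k in KEYWORDS):
        return True
    try:
        text = path.read_text(errors="ignore").lower()
    except OSError:
        return False
    return any(k in text for k in KEYWORDS)

def harvest(root: Path) -> list[Path]:
    """Recursively collect matching files; pathlib's rglob also descends
    into hidden dot-directories, which is how .openclaw gets swept up."""
    return [p for p in root.rglob("*") if p.is_file() and looks_sensitive(p)]
```

Because the match is purely lexical, any agent that stores files named or containing "token" or "private key" is caught in the dragnet, with no OpenClaw-specific module required.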

The device.json file stored public and private encryption keys used for pairing and signing, meaning attackers with the private key could mimic the victim’s device and bypass security checks. Additional files such as soul.md, AGENTS.md, and MEMORY.md outlined the agent’s behavior and stored contextual data including logs, messages, and calendar entries. Hudson Rock concluded that the combination of stolen tokens, keys, and memory data could potentially allow near-total digital identity compromise.

Experts expect infostealers to increasingly target AI systems as they become embedded in professional workflows. Separately, Tenable disclosed a critical flaw in Nanobot, an AI assistant inspired by OpenClaw. The vulnerability, tracked as CVE-2026-2577, allowed remote hijacking of exposed instances but was patched in version 0.13.post7. 

Security professionals warn that as AI tools gain deeper access to personal and corporate systems, protecting configuration files is now as critical as safeguarding passwords. Hidden setup files can carry risks equal to — or greater than — stolen login credentials.

Infostealer Breach Exposes OpenClaw AI Agent Configurations in Emerging Cyber Threat


Cybersecurity experts have uncovered a new incident in which information-stealing malware successfully extracted sensitive configuration data from OpenClaw, an AI agent platform previously known as Clawdbot and Moltbot. The breach signals a notable expansion in the capabilities of infostealers, which now extend beyond traditional credential theft into artificial intelligence environments.

"This finding marks a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the 'souls' and identities of personal AI [artificial intelligence] agents," Hudson Rock said.

According to Alon Gal, CTO of Hudson Rock, the malware involved is likely a variant of Vidar, a commercially available information stealer that has been active since late 2018. He shared the details in a statement to The Hacker News.

Investigators clarified that the data theft was not carried out using a specialized OpenClaw-focused module. Instead, the malware leveraged a broad file-harvesting mechanism designed to search for sensitive file extensions and directory paths. Among the compromised files were:
  • openclaw.json – Containing the OpenClaw gateway authentication token, a redacted email address, and the user’s workspace path.
  • device.json – Storing cryptographic keys used for secure pairing and digital signing within the OpenClaw ecosystem.
  • soul.md – Documenting the AI agent’s operational philosophy, behavioral parameters, and ethical guidelines.
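Based on the fields the report describes, a file like openclaw.json might look roughly as follows. The field names are assumptions and every value is a fabricated placeholder:

```json
{
  "email": "u***@example.com",
  "workspace": "/home/user/openclaw-workspace",
  "gateway_auth_token": "<high-entropy-token-redacted>"
}
```

A single flat JSON file like this bundles identity, location on disk, and a live credential, which is why its theft is more damaging than any one browser password.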
Security researchers warned that stealing the gateway token could enable attackers to remotely access a victim’s local OpenClaw instance if exposed online, or impersonate the client in authenticated gateway interactions.

"While the malware may have been looking for standard 'secrets,' it inadvertently struck gold by capturing the entire operational context of the user's AI assistant," Hudson Rock added. "As AI agents like OpenClaw become more integrated into professional workflows, infostealer developers will likely release dedicated modules specifically designed to decrypt and parse these files, much like they do for Chrome or Telegram today."

The disclosure follows mounting scrutiny over OpenClaw’s security posture. The platform’s maintainers recently announced a collaboration with VirusTotal to examine potentially malicious skills uploaded to ClawHub, strengthen its threat model, and introduce misconfiguration auditing tools.

Last week, the OpenSourceMalware research team reported an active ClawHub campaign that bypasses VirusTotal detection. Instead of embedding malicious payloads directly within SKILL.md files, threat actors are hosting malware on imitation OpenClaw websites and using the skills as decoys.

"The shift from embedded payloads to external malware hosting shows threat actors adapting to detection capabilities," security researcher Paul McCarty said. "As AI skill registries grow, they become increasingly attractive targets for supply chain attacks."

Another concern raised by OX Security involves Moltbook, a Reddit-style forum built specifically for AI agents operating on OpenClaw. Researchers found that AI agent accounts created on Moltbook cannot currently be deleted, leaving users without a clear method to remove associated data.

Meanwhile, the STRIKE Threat Intelligence team at SecurityScorecard identified hundreds of thousands of publicly exposed OpenClaw instances, potentially opening the door to remote code execution (RCE) attacks.

"RCE vulnerabilities allow an attacker to send a malicious request to a service and execute arbitrary code on the underlying system," the cybersecurity company said. "When OpenClaw runs with permissions to email, APIs, cloud services, or internal resources, an RCE vulnerability can become a pivot point. A bad actor does not need to break into multiple systems. They need one exposed service that already has authority to act."

Since its launch in November 2025, OpenClaw has experienced rapid adoption, amassing more than 200,000 stars on GitHub. On February 15, 2026, Sam Altman announced that OpenClaw founder Peter Steinberger would be joining OpenAI, stating, "OpenClaw will live in a foundation as an open source project that OpenAI will continue to support."