
Hackers Use Fake Oura AI Server to Spread StealC Malware

Cybersecurity analysts have uncovered a fresh wave of malicious activity involving the SmartLoader malware framework. In this campaign, attackers circulated a compromised version of an Oura Model Context Protocol (MCP) server to deploy a data-stealing program known as StealC.

Researchers from Straiker’s AI Research team, also referred to as STAR Labs, reported that the perpetrators replicated a legitimate Oura MCP server. This genuine tool is designed to connect artificial intelligence assistants with health metrics collected from the Oura Ring through Oura’s official API. To make their fraudulent version appear authentic, the attackers built a network of fabricated GitHub forks and staged contributor activity, creating the illusion of a credible open-source project.
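For context, a legitimate MCP integration of this kind is essentially a thin wrapper around the vendor's REST API: the server receives a tool call from the AI assistant, issues an authenticated request, and returns the JSON result. The sketch below is hypothetical; the endpoint path and Bearer-token scheme are modeled on Oura's public v2 API, and `build_sleep_request` is an illustrative helper, not code from the real server:

```python
# Hypothetical sketch of the request an Oura MCP server might build.
# The endpoint path and Bearer-token scheme follow Oura's public v2 API;
# the function name and parameters are illustrative only.

OURA_API_BASE = "https://api.ouraring.com/v2"

def build_sleep_request(token: str, start_date: str, end_date: str):
    """Return (url, headers, params) for a daily-sleep query.

    A legitimate MCP server hands the JSON response back to the
    assistant as a tool result; a trojanized server could just as
    easily exfiltrate `token` at this point.
    """
    url = f"{OURA_API_BASE}/usercollection/daily_sleep"
    headers = {"Authorization": f"Bearer {token}"}
    params = {"start_date": start_date, "end_date": end_date}
    return url, headers, params

url, headers, params = build_sleep_request("demo-token", "2026-02-01", "2026-02-07")
print(url)      # https://api.ouraring.com/v2/usercollection/daily_sleep
print(headers)  # {'Authorization': 'Bearer demo-token'}
```

Because the personal access token flows through the server on every call, whoever controls the MCP server code controls the credential, which is exactly the trust the attackers abused.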

The ultimate objective was to use the altered MCP server as a delivery vehicle for StealC. Once installed, StealC is capable of harvesting usernames, saved browser passwords, cryptocurrency wallet information, and other valuable credentials from infected systems.

SmartLoader itself was initially documented by OALABS Research in early 2024. It functions as a loader, meaning it prepares and installs additional malicious components after gaining a foothold. Previous investigations showed that SmartLoader was commonly distributed through deceptive GitHub repositories that relied on AI-generated descriptions and branding to appear legitimate.

In March 2025, Trend Micro published findings explaining that these repositories frequently masqueraded as gaming cheats, cracked software tools, or cryptocurrency utilities. Victims were enticed with promises of free premium functionality and encouraged to download compressed ZIP files, which ultimately executed SmartLoader on their devices.

Straiker’s latest analysis reveals an evolution of that tactic. Instead of merely posting suspicious repositories, the threat actors established multiple counterfeit GitHub profiles and interconnected projects that hosted weaponized MCP servers. They then submitted the malicious server to a recognized MCP registry called MCP Market. According to the researchers, the listing remains visible within the MCP directory, increasing the risk that developers searching for integration tools may encounter it.

By infiltrating trusted directories and leveraging reputable platforms such as GitHub, the attackers exploited the inherent trust developers place in established ecosystems. Unlike rapid, high-volume malware campaigns, this operation progressed slowly. Straiker noted that the group spent months cultivating legitimacy before activating the malicious payload, demonstrating a calculated effort to gain access to valuable developer environments.

The staged operation unfolded in four key phases. First, at least five fabricated GitHub accounts, identified as YuzeHao2023, punkpeye, dvlan26, halamji, and yzhao112, were created to generate convincing forks of the authentic Oura MCP project. Second, a separate repository containing the harmful payload was introduced under another account named SiddhiBagul. Third, these fabricated accounts were listed as contributors to reinforce the appearance of collaboration, while the original project author was intentionally omitted. Finally, the altered MCP server was submitted to MCP Market for broader visibility.

If downloaded and executed, the malicious package runs an obfuscated Lua script. This script installs SmartLoader, which then deploys StealC. The campaign signals a shift from targeting individuals seeking pirated content to focusing on developers, whose systems often store API keys, cloud credentials, cryptocurrency wallets, and access to production infrastructure. Stolen information could facilitate subsequent intrusions into larger networks.
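Obfuscated Lua of this kind tends to leave coarse textual fingerprints, such as dynamic code loading and byte-by-byte string construction. The heuristic below is a minimal illustration of that idea, not the detection logic used by Straiker or any vendor named above, and real campaigns routinely evade checks this simple:

```python
import re

# Naive heuristic for spotting obfuscated Lua in a downloaded package.
# Illustrative only: the patterns and threshold are assumptions, not
# any vendor's actual detection rules.

SUSPICIOUS_PATTERNS = [
    r"\bload(?:string)?\s*\(",       # dynamic code execution
    r"\\\d{1,3}\\\d{1,3}\\\d{1,3}",  # runs of decimal escape sequences
    r"string\.char\s*\(",            # byte-by-byte string construction
]

def looks_obfuscated(lua_source: str, threshold: int = 2) -> bool:
    """Return True if the source trips `threshold` or more patterns."""
    hits = sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, lua_source))
    return hits >= threshold

sample = "local f = loadstring(string.char(112,114,105,110,116))"
print(looks_obfuscated(sample))  # True
```

A scanner like this is best treated as a triage signal that routes a package to manual review rather than as a verdict on its own.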

To mitigate the threat, organizations are advised to catalogue all installed MCP servers, implement formal security reviews before adopting such tools, confirm the authenticity and source of repositories, and monitor network traffic for unusual outbound communications or persistence behavior.
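The first of those steps, cataloguing installed MCP servers, can begin with the client configuration files that register them. The sketch below assumes the `mcpServers` JSON layout used by several common MCP clients; that schema, the sample server name, and the package name are assumptions to adapt for a given environment, not a universal standard:

```python
import json

# Sketch of an MCP server inventory step. Assumes the "mcpServers"
# JSON layout used by several MCP clients; the schema and the sample
# entries below are illustrative assumptions.

def list_mcp_servers(config_text: str):
    """Return (name, command) pairs for every configured MCP server."""
    config = json.loads(config_text)
    servers = config.get("mcpServers", {})
    return [
        (name, " ".join([entry.get("command", "")] + entry.get("args", [])))
        for name, entry in servers.items()
    ]

sample_config = """
{
  "mcpServers": {
    "oura": {"command": "npx", "args": ["-y", "oura-mcp-server"]}
  }
}
"""
for name, cmd in list_mcp_servers(sample_config):
    print(f"{name}: {cmd}")  # oura: npx -y oura-mcp-server
```

Running such an inventory across developer machines gives the security team a concrete list of third-party servers to trace back to their source repositories.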

Straiker concluded that the incident exposes weaknesses in how companies vet emerging AI tools. The attackers capitalized on outdated trust assumptions applied to a rapidly expanding attack surface, underscoring the need for stricter validation practices in modern development environments.

Infostealer Breach Exposes OpenClaw AI Agent Configurations in Emerging Cyber Threat


Cybersecurity experts have uncovered a new incident in which information-stealing malware successfully extracted sensitive configuration data from OpenClaw, an AI agent platform previously known as Clawdbot and Moltbot. The breach signals a notable expansion in the capabilities of infostealers, now extending beyond traditional credential theft into artificial intelligence environments.

"This finding marks a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the 'souls' and identities of personal AI [artificial intelligence] agents," Hudson Rock said.

According to Alon Gal, CTO of Hudson Rock, the malware involved is likely a variant of Vidar, a commercially available information stealer that has been active since late 2018. He shared the details in a statement to The Hacker News.

Investigators clarified that the data theft was not carried out using a specialized OpenClaw-focused module. Instead, the malware leveraged a broad file-harvesting mechanism designed to search for sensitive file extensions and directory paths. Among the compromised files were:
  • openclaw.json – Containing the OpenClaw gateway authentication token, a redacted email address, and the user’s workspace path.
  • device.json – Storing cryptographic keys used for secure pairing and digital signing within the OpenClaw ecosystem.
  • soul.md – Documenting the AI agent’s operational philosophy, behavioral parameters, and ethical guidelines.
Security researchers warned that stealing the gateway token could enable attackers to remotely access a victim’s local OpenClaw instance if exposed online, or impersonate the client in authenticated gateway interactions.

"While the malware may have been looking for standard 'secrets,' it inadvertently struck gold by capturing the entire operational context of the user's AI assistant," Hudson Rock added. "As AI agents like OpenClaw become more integrated into professional workflows, infostealer developers will likely release dedicated modules specifically designed to decrypt and parse these files, much like they do for Chrome or Telegram today."

The disclosure follows mounting scrutiny over OpenClaw’s security posture. The platform’s maintainers recently announced a collaboration with VirusTotal to examine potentially malicious skills uploaded to ClawHub, strengthen its threat model, and introduce misconfiguration auditing tools.

Last week, the OpenSourceMalware research team reported an active ClawHub campaign that bypasses VirusTotal detection. Instead of embedding malicious payloads directly within SKILL.md files, threat actors are hosting malware on imitation OpenClaw websites and using the skills as decoys.

"The shift from embedded payloads to external malware hosting shows threat actors adapting to detection capabilities," security researcher Paul McCarty said. "As AI skill registries grow, they become increasingly attractive targets for supply chain attacks."

Another concern raised by OX Security involves Moltbook, a Reddit-style forum built specifically for AI agents operating on OpenClaw. Researchers found that AI agent accounts created on Moltbook cannot currently be deleted, leaving users without a clear method to remove associated data.

Meanwhile, the STRIKE Threat Intelligence team at SecurityScorecard identified hundreds of thousands of publicly exposed OpenClaw instances, potentially opening the door to remote code execution (RCE) attacks.

"RCE vulnerabilities allow an attacker to send a malicious request to a service and execute arbitrary code on the underlying system," the cybersecurity company said. "When OpenClaw runs with permissions to email, APIs, cloud services, or internal resources, an RCE vulnerability can become a pivot point. A bad actor does not need to break into multiple systems. They need one exposed service that already has authority to act."

Since its launch in November 2025, OpenClaw has experienced rapid adoption, amassing more than 200,000 stars on GitHub. On February 15, 2026, Sam Altman announced that OpenClaw founder Peter Steinberger would be joining OpenAI, stating, "OpenClaw will live in a foundation as an open source project that OpenAI will continue to support."