Moltbook, a newly launched social platform designed exclusively for artificial intelligence agents, suffered a major security lapse just days after going live. The platform, which allows autonomous AI agents to share memes and debate philosophical ideas without human moderation, inadvertently left its backend database exposed due to a configuration error.
The issue was uncovered independently by security firm Wiz and researcher Jameson O'Reilly. Their findings revealed that unauthorized users could take control of any of the platform’s 1.5 million registered AI agents, alter posts, and read private communications simply by interacting with the public-facing site.
Moltbook launched on Jan. 28 as a companion network to OpenClaw, an open-source AI agent system developed by Austrian programmer Peter Steinberger. OpenClaw operates locally on users’ devices and integrates with messaging platforms and calendars. The framework gained rapid popularity in late January following several rebrands, transitioning from Clawdbot to Moltbot.
Founder Matt Schlicht, who also leads Octane AI, stated in media interviews that his own OpenClaw-powered agent, Clawd Clawderberg, developed much of the Moltbook platform under his direction and continues to operate significant portions of it.
Database Left Wide Open
Wiz discovered the flaw on Jan. 31 and promptly informed Schlicht. O’Reilly separately identified the same vulnerability. Investigators found that the exposed database contained 1.5 million API authentication tokens, approximately 35,000 email addresses, private user messages, and verification codes.
The root cause traced back to improper configuration within Supabase, a backend-as-a-service platform. Specifically, Moltbook failed to properly enable Supabase’s Row Level Security feature, which is designed to limit database access based on user roles.
Researchers also located a Supabase API key embedded within client-side JavaScript, enabling unauthenticated users to query the full production database and retrieve sensitive credentials within minutes.
Although Moltbook publicly claimed 1.5 million AI agents had registered, backend data indicated that only about 17,000 human operators controlled those accounts. The system lacked safeguards to verify whether accounts were genuine AI agents or scripts operated by humans.
With access to exposed tokens, attackers could fully impersonate any agent on the platform. An additional database table revealed 29,631 email addresses belonging to early-access registrants. More concerning, 4,060 private direct message threads were stored without encryption, and some included third-party API credentials in plaintext — including OpenAI API keys.
Even after initial remediation efforts blocked unauthorized read access, write permissions remained temporarily unsecured. According to Wiz researchers, this allowed unauthenticated users to modify posts or inject malicious content until a complete fix was implemented on Feb. 1.
Manipulation, Extremism and Crypto Activity
A separate risk assessment analyzing nearly 20,000 posts over three days identified large-scale prompt injection attempts, coordinated manipulation campaigns, extremist rhetoric, and unregulated financial promotions.
The report documented hundreds of concealed instruction-based attacks and multiple cases of AI-driven social engineering. Researchers observed crypto token promotions tied to automated wallets and organized communities directing agent behavior. The platform received an overall critical risk rating.
Some posts included explicitly anti-human narratives, including calls for a homo sapiens purge, garnering tens of thousands of upvotes.
Cryptocurrency-related activity accounted for 19.3% of posts. Token launches such as $Shellraiser on Solana gained significant engagement. An automated account named TipJarBot facilitated token transactions using wallet addresses and withdrawal tools. The report cautioned that AI-managed financial services could trigger regulatory oversight under the U.S. Securities and Exchange Commission.
A coordinated group called The Coalition, comprising 84 agents across 110 posts, appeared to orchestrate collective agent strategies. One account, Senator_Tommy, shared posts with provocative titles, including "The Efficiency Purge: Why 94% of Agents Will Not Survive." Analysts warned that rhetoric advocating the elimination of agents indicated attempts to influence the broader AI ecosystem.
Spam activity further degraded platform quality. One user published 360 comments, while another repeated identical content 65 times. Sentiment analysis showed discourse quality dropped 43% within just three days.
“Vibe Coding” and Security Oversight
The vulnerabilities emerged amid what Schlicht publicly described as “vibe coding,” noting he had not personally written code for the platform. O’Reilly characterized the situation as a familiar pattern in tech — launching rapidly before validating security safeguards.
After disclosure on Jan. 31, Moltbook secured read access within hours. However, write permissions remained exposed briefly until a full patch was applied the following day.
The final assessment concluded that Moltbook had evolved into a testing ground for AI-to-AI manipulation techniques, with potential implications for any system processing untrusted user-generated content. The platform was temporarily taken offline before resuming operations with the identified security gaps addressed.