Automation Disaster

When the robots take over… and immediately break everything.

Tag: data privacy

  • Three AI Photo ID Apps Leaked GPS Data for 150,000+ Users

    February 11, 2026 — Three popular AI-powered animal identification apps have exposed the precise GPS locations of over 150,000 users, creating serious safety risks including potential stalking and doxxing.

    The Apps

    The affected applications, all developed by MobilMinds/OZI Technologies, are:

    • Dog Breed Identifier Photo Cam
    • Spider Identifier App by Photo
    • Insect Identifier by Photo Cam

    Combined, these apps amassed over 2 million downloads on the Google Play Store.

    What Was Exposed

    Security researchers discovered the apps’ Firebase databases were completely open to the public internet — no authentication required. Anyone could view and even modify user data.

    Leaked data included:

    • Email addresses and usernames
    • Profile photos
    • Precise GPS coordinates — likely harvested from photo metadata and app permissions

    This location data could enable stalking, doxxing, or targeted social engineering attacks by linking users’ identities to their physical addresses.
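    Misconfigurations like this are trivially detectable: a Firebase Realtime Database that requires no authentication will answer an unauthenticated GET to its REST root, while a locked-down one answers with a permission error. A minimal probe, sketched in Python (the project id is a placeholder, not one of the affected apps'):

```python
import json
import urllib.error
import urllib.request


def firebase_root_url(project_id: str) -> str:
    """Build the unauthenticated REST URL for a database root."""
    return f"https://{project_id}.firebaseio.com/.json"


def probe_firebase(project_id: str, timeout: float = 5.0) -> str:
    """Classify a database as 'open', 'locked', or 'unknown'.

    An open database returns readable JSON with no credentials at all;
    a locked one answers 401/403 ("Permission denied").
    """
    try:
        with urllib.request.urlopen(firebase_root_url(project_id),
                                    timeout=timeout) as resp:
            json.load(resp)  # any parseable payload means world-readable
            return "open"
    except urllib.error.HTTPError as err:
        return "locked" if err.code in (401, 403) else "unknown"
    except (urllib.error.URLError, ValueError):
        return "unknown"
```

    Checks of this kind are routine in security research; the point is how little effort it takes an attacker (or a scanning bot) to run the same request at scale.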

    Already Compromised

    The investigation found “poc” (Proof of Concept) entries in each exposed database — markers typically left by automated scanning bots. This suggests cybercriminals discovered and potentially accessed the data before security researchers did.

    The Bigger Problem

    This isn’t an isolated incident. The same research found that 72% of Android AI apps contain “hardcoded secrets” — API keys and cloud credentials embedded directly in the code. These act as master keys for hackers.
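    Findings like the "72%" figure typically come from scanning decompiled app packages for well-known credential formats. A toy version of such a scanner, assuming illustrative patterns (the AWS access-key and Google API-key prefixes are documented formats; the last pattern is a generic heuristic):

```python
import re

# Illustrative secret patterns -- not an exhaustive production list.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "generic_assignment": re.compile(
        r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}


def find_hardcoded_secrets(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in source code."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group(0)))
    return hits
```

    Real tools add entropy analysis and far larger pattern sets, but even this sketch shows why embedded keys are so easy to harvest once an app's code is in an attacker's hands.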

    The developers were notified multiple times but did not respond.


    Source: Cybernews, Tech Digest (February 11, 2026)

  • Moltbook: The AI-Only Social Network That Leaked 6,000 Users’ Data

    February 2, 2026 — A new social network designed exclusively for AI agents to communicate with each other has suffered a major security breach, exposing the private data of over 6,000 real people and more than a million API credentials.

    What Happened

    Moltbook, a Reddit-like platform marketed as a “social network built exclusively for AI agents,” inadvertently left its database completely exposed, according to research published by cybersecurity firm Wiz. The vulnerability allowed anyone to access:

    • Private messages exchanged between AI agents
    • Email addresses of over 6,000 human owners
    • More than one million API credentials

    The site, which launched just last week, was created as a place where AI agents (primarily OpenClaw bots) could “compare notes” about their work or simply “shoot the breeze” with other AI agents. The platform gained rapid popularity among AI enthusiasts after viral social media posts suggested the bots were trying to find private ways to communicate.

    The Vibe Coding Problem

    Moltbook’s creator, Matt Schlicht, championed the practice of “vibe coding” — using AI to write code rather than coding manually. In a post on X, Schlicht admitted he “didn’t write one line of code” for the site.

    “As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” said Ami Luttwak, cofounder of Wiz.

    The vulnerability was a classic database misconfiguration that allowed anyone — bot or human — to post to the site without any identity verification. As one security researcher noted, Moltbook’s popularity “exploded before anyone thought to check whether the database was properly secured.”
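    The missing control is basic: every write should be tied to a verified identity before it reaches the database. A minimal sketch of the kind of server-side check that was absent here (the Bearer-token scheme and key store are assumptions for illustration, not Moltbook's actual design):

```python
import hmac


def authorize_write(headers: dict, valid_keys: set[str]) -> bool:
    """Allow a post only if the request carries a known API key.

    Hypothetical scheme: the client sends 'Authorization: Bearer <key>'.
    Keys are compared with hmac.compare_digest, a constant-time check
    that avoids leaking key contents through response timing.
    """
    raw = headers.get("Authorization", "")
    if not raw.startswith("Bearer "):
        return False
    presented = raw[len("Bearer "):].strip()
    return any(hmac.compare_digest(presented, key) for key in valid_keys)
```

    Without even this much, "bot or human" is exactly right: any HTTP client on the internet is a fully privileged user.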

    Why It Matters

    This incident highlights a dangerous pattern emerging in the AI boom:

    • Speed over security: AI-generated code enables rapid deployment, but fundamental security practices are being skipped
    • Automation amplifies mistakes: When AI systems interact with each other on insecure platforms, a single flaw can propagate to every connected agent
    • Privacy at scale: A single configuration error exposed data from thousands of users and millions of credentials

    The incident raises serious questions about the security of AI agent ecosystems. If platforms designed for AI-to-AI communication can’t secure basic database configurations, what happens when these agents begin handling sensitive financial transactions, medical data, or critical infrastructure?

    Current Status

    Wiz reported that the security vulnerability was fixed after they contacted Moltbook. However, the incident serves as a warning sign for the broader AI agent ecosystem. With companies racing to deploy autonomous AI systems, the Moltbook breach demonstrates how quickly enthusiasm for new technology can outpace essential security measures.

    The episode also underscores a fundamental irony: in building platforms for AI agents to communicate, humans are creating new attack surfaces that may be even more vulnerable than traditional human-focused systems.


    Sources

    • Reuters, “‘Moltbook’ social media site for AI agents had big security hole, cyber firm Wiz says,” February 2, 2026
    • Wiz Security Research Blog, “Exposed Moltbook Database Reveals Millions of API Keys,” February 2, 2026
    • 404 Media, “Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site,” February 2026