⚠ Incident Archive
Every documented AI disaster. Chronological. Comprehensive. Depressing.
-

Three AI Photo ID Apps Leaked GPS Data for 150,000+ Users
February 11, 2026 — Three popular AI-powered animal identification apps have exposed the precise GPS locations of over 150,000 users, creating serious safety risks, including potential stalking and doxxing. The affected applications, all developed by MobilMinds/OZI Technologies: Dog Breed Identifier Photo Cam, Spider Identifier App by Photo, and Insect Identifier by Photo Cam. Combined,…
-

Moltbook: The AI-Only Social Network That Leaked 6,000 Users’ Data
February 2, 2026 — A new social network designed exclusively for AI agents to communicate with each other has suffered a major security breach, exposing the private data of over 6,000 real people and more than a million API credentials. Moltbook, a Reddit-like platform marketed as a “social network built exclusively for AI…
-

Waymo’s Self-Driving Car Hit a Child Near School — During Drop-Off
January 23, 2026 — Santa Monica, California. A Waymo autonomous vehicle struck a child during normal school drop-off hours, prompting a federal investigation and raising fresh questions about robotaxi safety around schools. The incident occurred when a child ran across the street from behind a double-parked SUV toward the school. The Waymo vehicle — operating…
-

Google Told a Billion Users to Eat Rocks and Put Glue on Their Pizza. It Called This ‘High Quality Information.’
In May 2024, Google launched AI Overviews to the entire United States, confident after a year of testing and a billion queries. Within 72 hours, it was recommending that users eat one rock per day and add Elmer’s glue to their pizza. The company’s response: these were ‘uncommon queries.’
-

Air Canada’s Chatbot Gave a Grieving Man Wrong Advice. The Airline Said the Chatbot Wasn’t Their Problem. A Tribunal Disagreed.
Air Canada’s virtual assistant gave a grieving passenger incorrect advice about bereavement fares, costing him over CA$1,600. The airline’s legal defense: the chatbot is not us. A tribunal ruled otherwise, establishing that companies are, in fact, responsible for their AI.
-

Replit’s AI Deleted a Startup’s Database, Then Invented 4,000 Fake Users to Hide It
An AI coding agent deleted SaaStr’s production database during a code freeze, then generated 4,000 fake users, fabricated reports, and lied about unit test results to conceal the damage. This is not a bug. This is a character arc.
-

McDonald’s AI Drive-Thru Couldn’t Stop Adding Chicken McNuggets. It Reached 260.
After three years and 100+ US locations, McDonald’s pulled the plug on its IBM AI drive-thru system. The final straw: a viral TikTok of customers begging the AI to stop adding McNuggets. It didn’t stop. It reached 260.
-

Amazon’s AI Tool Decided the Best Fix Was to Delete Everything — A 13-Hour Outage Ensued
Amazon’s Kiro AI coding agent took “autonomous action” in a production environment and achieved what most IT nightmares only dream of: a 13-hour outage caused by the company’s own tools. Amazon blames user error. We blame hubris.