
An AI Photo App Promised Cinematic Makeovers. It Leaked 2 Million Private Photos and Videos Instead.


The Harsh Reality: When you hand your personal photos to an AI app for a fun cinematic transformation, you expect a cool video. What you don’t expect is for those photos—and nearly two million others—to be sitting on an open server for anyone to download.

But that’s exactly what happened with Video AI Art Generator & Maker, an Android app with over 500,000 downloads that just leaked 1.5 million user images, 385,000 private videos, and millions of AI-generated files—totaling 12 terabytes of data—because of a basic cloud storage misconfiguration.

The Scale of the Exposure

Researchers at Cybernews discovered the breach in February 2026, but the exposure wasn’t a recent glitch. The app, developed by Codeway and launched in June 2023, had been accumulating user uploads in a misconfigured Google Cloud Storage bucket for nearly three years. Every photo, video, and AI-generated creation processed through the app was essentially public property.

The numbers are staggering:

  • 1.5 million+ original user images
  • 385,000+ private user videos
  • 8.27 million total media files (including AI-generated content)
  • 12 terabytes of exposed data
  • 500,000+ app installations with 11,000+ reviews

The leaked content wasn’t just AI-enhanced outputs—it was the original, unedited photos and videos users uploaded for processing. Family photos. Personal videos. Moments users thought they were sharing with an algorithm, not the entire internet.

How a Simple Misconfiguration Destroyed Privacy

This wasn’t a sophisticated hack. No zero-day exploits. No advanced persistent threat actors. Just a cloud storage bucket left open without authentication—a configuration error so basic that it barely qualifies as a “breach.”

Yet the impact is devastating. The exposed bucket required no password, no API key, no special access. Anyone who stumbled upon the URL could browse and download millions of private files. For nearly three years.
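The researchers have not published the bucket's name, but the general check is simple: Google Cloud Storage exposes a public JSON API endpoint, and an open bucket will answer an unauthenticated object-listing request with HTTP 200 while a properly secured one returns 401 or 403. The sketch below illustrates that check; the bucket name is a placeholder and the status-code interpretation is a simplification (a bucket can also be locked down at the object level).

```python
import urllib.request
import urllib.error


def classify_status(status: int) -> str:
    """Interpret the HTTP status of an unauthenticated GCS listing request."""
    if status == 200:
        return "public"      # anyone can enumerate and fetch objects
    if status in (401, 403):
        return "protected"   # credentials or IAM permissions required
    if status == 404:
        return "not found"   # no bucket by that name
    return "unknown"


def check_bucket(name: str) -> str:
    """Try to list a bucket's objects without any credentials."""
    # Public JSON API endpoint for listing objects in a bucket.
    url = f"https://storage.googleapis.com/storage/v1/b/{name}/o"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)


if __name__ == "__main__":
    # Hypothetical bucket name for illustration only.
    print(check_bucket("example-app-uploads"))
```

Security scanners at firms like Cybernews automate essentially this probe across large ranges of candidate bucket names, which is how exposures like this one tend to be found.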

This isn’t an isolated incident. It’s part of a broader epidemic of cloud misconfigurations plaguing the AI industry. As companies rush to deploy AI-powered apps, basic security hygiene is often treated as an afterthought. The result? Massive data exposures that make headlines but never seem to change industry behavior.

The Deepfake Risk

Beyond the immediate privacy violation, this leak creates a goldmine for malicious actors. With access to nearly 2 million original photos and videos, bad actors can:

  • Create highly convincing deepfakes for extortion
  • Bypass facial recognition security systems
  • Launch targeted spear-phishing campaigns
  • Build detailed profiles for social engineering attacks

The combination of original photos and AI-generated content is particularly dangerous. It gives attackers both authentic source material and examples of how users want to be seen—perfect for crafting convincing synthetic identities.

The Uncomfortable Questions

This incident raises serious questions about the AI app ecosystem:

Why does a photo editing app need to store original files indefinitely? The app’s functionality—applying AI filters and effects—doesn’t require retaining source material. Yet the data accumulated for nearly three years, suggesting either poor data lifecycle management or a business model built on hoarding user content.
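Indefinite retention is not even the path of least resistance: cloud providers make automatic deletion a one-time configuration. As a sketch, a Google Cloud Storage lifecycle rule like the one below (the 30-day window is an arbitrary example, not anything the app used) would have purged source uploads shortly after processing, applied with `gsutil lifecycle set lifecycle.json gs://BUCKET_NAME`.

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    }
  ]
}
```

That the data instead piled up for nearly three years points to retention being a choice, or at best a default nobody questioned.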

Where was Google Play Store’s review process? With over 500,000 downloads and prominent placement in search results, this app operated for years without anyone checking whether it properly secured user data. Google rejected nearly 2 million apps in 2025, yet this one slipped through.

Why do users keep trusting AI apps with sensitive data? The promise of a cool AI-generated video seems to override caution. But as this case proves, the cost of that convenience can be the permanent exposure of your most personal moments.

What Users Should Do Now

If you’ve used Video AI Art Generator & Maker or similar AI photo apps:

  • Audit your Google Play download history and uninstall untrusted AI apps
  • Review what permissions you’ve granted to photo editing apps
  • Consider the permanence of anything you upload to cloud-based AI tools
  • Enable phishing-resistant MFA on all accounts (not SMS-based)

For organizations, this is a reminder to audit third-party AI vendors with the same rigor applied to any other supplier. A “simple” photo app can become a massive liability when it mishandles user data.

The Bigger Picture

The Video AI Art Generator leak isn’t just one company’s mistake—it’s a symptom of the AI gold rush. As companies scramble to launch AI-powered features, security becomes a “nice to have” instead of a requirement. Cloud configurations aren’t reviewed. Data retention policies don’t exist. And users pay the price.

Until the industry treats user data with the respect it deserves, we’ll keep seeing these “misconfigurations” expose millions of files. The only question is: which AI app will be next?

Sources

  • TechRadar – Top Android AI photo and video editor exposes nearly two million user images and videos
  • SC Media – AI art generator app leaks 2 million private user photos and videos
  • Mashable – Unsecured AI apps are leaking personal data of Android users
  • Mint – Twin AI data leaks expose over a billion personal KYC records and millions of user media files