Artificial intelligence-generated videos are rapidly taking over social media feeds, blurring the line between reality and fabrication. Experts say that spotting fakes is becoming increasingly difficult — but for now, one simple clue often gives them away: poor image quality.
According to digital forensics specialists, many AI-generated clips appear grainy, pixelated, or oddly compressed — the kind of footage that looks like it was “filmed on a potato.” While this isn’t definitive proof of fakery, researchers say low-quality visuals are often used deliberately to conceal digital imperfections.
Hany Farid, a computer science professor at the University of California, Berkeley, and founder of deepfake detection firm GetReal Security, said such quality issues are among the first indicators his team examines. “If I’m trying to fool people, I generate my fake video, then reduce the resolution and add compression,” Farid explained. “That way, any artifacts that could give it away are harder to see.”
Recent viral examples show how convincing these low-quality clips can be. A fake video of rabbits bouncing on a trampoline racked up more than 240 million views on TikTok, while a fabricated clip of a couple meeting on the New York subway melted millions of hearts before being exposed as an AI creation. Another video, depicting an American preacher railing against billionaires, fooled viewers with its grainy, zoomed-in footage.
“The leading text-to-video generators, like Google’s Veo and OpenAI’s Sora, are already producing highly realistic results,” Farid said. “The inconsistencies today are subtle — smooth skin textures, unnatural hair movement, or shifting backgrounds. But when the footage is low quality, it hides these clues.”
Matthew Stamm, who heads the Multimedia and Information Security Lab at Drexel University, warned that this window for visual detection is closing fast. “I’d anticipate that within two years, the obvious visual cues will disappear from AI videos entirely,” he said. “We’re already seeing that happen with AI-generated images.”
Researchers are now working on new verification tools that could help distinguish real footage from synthetic content. Some involve digital “fingerprints” embedded into files at the time of creation, while others use statistical analysis to detect manipulation invisible to the human eye.
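The simplest version of the “fingerprint” idea is a cryptographic hash of a file’s contents: change even a single byte and the digest no longer matches. Real provenance systems go much further, signing and embedding metadata at the moment of capture, but this minimal Python sketch (the function name `file_fingerprint` is illustrative, not from any named tool) shows the underlying principle.

```python
import hashlib

def file_fingerprint(path: str, chunk_size: int = 65536) -> str:
    """Compute a SHA-256 digest of a file's bytes.

    This is only a sketch of the core idea behind content
    fingerprinting: altering any byte of the file produces a
    completely different digest, so a stored fingerprint can
    reveal that footage was modified after it was recorded.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

In practice a bare hash proves only that a file is unchanged, not that it was authentic to begin with; that is why production systems pair the hash with signed capture-time metadata.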
Yet experts say technology alone won’t be enough. Digital literacy specialist Mike Caulfield believes people must learn to approach online video with skepticism. “We have to stop assuming that a video means anything out of context,” he said. “In the long run, provenance — where a video comes from — will matter more than what it looks like.”
As AI continues to advance, analysts warn that the internet’s “seeing is believing” era is coming to an end. For now, a blurry, low-resolution video might be more than just bad quality — it could be the latest in a growing wave of artificial illusions.
