Artificial intelligence is transforming how videos are created and shared, and the change is happening at a startling pace. In only a few months, AI-powered video generators have advanced so much that people are struggling to tell whether a clip is real or synthetic. Experts say that this is only the beginning of a much larger shift in how the public perceives recorded reality.
The uncomfortable truth is that most of us will eventually fall for a fake video. Some already have. The technology is improving so quickly that it is undermining the basic assumption that a video camera captures the truth. Until we adapt, it is important to know what clues can still help identify computer-generated clips before that distinction disappears completely.
The Quality Clue: When Bad Video Looks Suspicious
At the moment, the most reliable sign of a potentially AI-generated video is surprisingly simple: poor image quality. If a clip looks overly grainy, blurred, or compressed, that should raise immediate suspicion. Researchers in digital forensics often start their analysis by checking resolution and clarity.
Hany Farid, a digital-forensics specialist at the University of California, Berkeley, explains that low-quality videos often hide the subtle visual flaws created by AI systems. These systems, while impressive, still struggle to render fine details accurately. Blurring and pixelation can conveniently conceal these inconsistencies.
However, it is essential to note that not all low-quality clips are fake. Some authentic videos are genuinely filmed under poor lighting or with outdated equipment. Likewise, not every AI-generated video looks bad. The point is that unclear or downgraded quality makes fakes harder to detect.
Why Lower Resolution Helps Deception
Today’s top AI models, such as Google’s Veo and OpenAI’s Sora, have reduced obvious mistakes like extra fingers or distorted text. The issues they produce are much subtler: unusually smooth skin textures, unnatural reflections, strange shifts in hair or clothing, or background movements that defy physics. When resolution is high, those flaws are easier to catch. When the video is deliberately compressed, they almost vanish.
That is why deceptive creators often lower a video’s quality on purpose. By reducing resolution and adding compression, they hide the “digital fingerprints” that could expose a fake. Experts say this is now a common technique among those who intend to mislead audiences.
Short Clips Are Another Warning Sign
Length can be another indicator. Because generating AI video is still computationally expensive, most AI-generated clips are short, often six to ten seconds. Longer clips require more processing time and increase the risk of errors appearing. As a result, many deceptive videos online are short, and when longer ones are made, they are typically stitched together from several shorter segments. If you notice sharp cuts or changes every few seconds, that could be another red flag.
Real-World Examples of Viral Fakes
In recent months, several viral examples have proven how convincing AI content can be. A video of rabbits jumping on a trampoline received over 200 million views before viewers learned it was synthetic. A romantic clip of two strangers meeting on the New York subway was also revealed to be AI-generated. Another viral post showed an American priest delivering a fiery sermon against billionaires; it, too, turned out to be fake.
All these videos shared one detail: they looked like they were recorded on old or low-grade cameras. The bunny video appeared to come from a security camera, the subway couple’s clip was heavily pixelated, and the preacher’s footage was slightly zoomed and blurred. These imperfections made the fakes seem authentic.
Why These Signs Will Soon Disappear
Unfortunately, these red flags are temporary. Both Farid and other researchers, such as Matthew Stamm of Drexel University, warn that visual clues are fading fast. AI systems are evolving toward flawless realism, and within a couple of years even experts may struggle to detect fakes by sight alone. This evolution mirrors what happened with AI-generated images, where obvious errors like distorted hands or melted faces have mostly disappeared.
In the future, video verification will depend less on what we see and more on what the data reveals. Forensic tools can already identify statistical irregularities in pixel distribution or file structure that the human eye cannot perceive. These traces act like invisible fingerprints left during video generation or manipulation.
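To make that idea concrete, the sketch below (Python with NumPy, operating on a synthetic frame rather than a real video) computes a crude high-pass noise residual and compares its variance across blocks. This is only an illustration of the general principle of looking for statistical irregularities in pixel noise; it is not how any particular forensic tool works, and real detectors rely on far more sophisticated statistical models.

```python
import numpy as np

def noise_residual(frame: np.ndarray) -> np.ndarray:
    """Crude high-pass residual: the frame minus a 3x3 box-blurred copy of itself."""
    padded = np.pad(frame, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return frame - blurred

def blockwise_variance(residual: np.ndarray, block: int = 16) -> np.ndarray:
    """Variance of the residual in each block. Natural sensor noise tends to be
    fairly uniform across a frame; generated or heavily edited regions can stand out."""
    h, w = residual.shape
    rows = []
    for y in range(0, h - block + 1, block):
        row = []
        for x in range(0, w - block + 1, block):
            row.append(residual[y:y + block, x:x + block].var())
        rows.append(row)
    return np.array(rows)

# Synthetic grayscale "frame" purely for demonstration.
rng = np.random.default_rng(0)
frame = rng.normal(128, 10, size=(128, 128))
frame[32:96, 32:96] = 128  # an unnaturally smooth patch

stats = blockwise_variance(noise_residual(frame))
print("median block variance:", np.median(stats))
print("minimum block variance:", stats.min())  # the smooth patch shows up as an outlier
```

In this toy example, the unnaturally smooth region produces block variances far below the rest of the frame; real forensic analysis applies the same "look for noise that doesn't belong" logic with much richer statistical models.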
Tech companies are now developing standards to authenticate digital content. The idea is for cameras to automatically embed cryptographic information into files at the moment of recording, verifying the image’s origin. Similarly, AI systems could include transparent markers to indicate that a video was machine-generated. While these measures are promising, they are not yet universally implemented.
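The details differ between standards, but the core idea is to bind a cryptographic hash of the recorded bytes to a signed provenance record at the moment of capture. The minimal sketch below uses Python's standard hashlib and hmac modules with a shared secret purely for illustration; actual schemes such as C2PA use certificate-based public-key signatures and embed the metadata inside the file itself.

```python
import hashlib
import hmac
import json

# Illustrative only: a shared-secret HMAC stands in for the certificate-based
# signatures that real provenance standards use.
SIGNING_KEY = b"camera-device-secret"

def sign_recording(video_bytes: bytes, device_id: str) -> dict:
    """Produce a provenance record binding the content hash to a device."""
    content_hash = hashlib.sha256(video_bytes).hexdigest()
    payload = json.dumps({"device": device_id, "sha256": content_hash}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_recording(video_bytes: bytes, record: dict) -> bool:
    """Check that the video still matches the signed provenance record."""
    expected = hmac.new(SIGNING_KEY, record["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    claimed = json.loads(record["payload"])["sha256"]
    return hashlib.sha256(video_bytes).hexdigest() == claimed

clip = b"\x00\x01fake-video-bytes"               # stand-in for real file contents
record = sign_recording(clip, device_id="camera-1234")
print(verify_recording(clip, record))            # True
print(verify_recording(clip + b"edit", record))  # False: any alteration breaks the hash
```

The point of the sketch is simply that any edit to the recorded bytes invalidates the signature, which is what would let platforms and viewers check a clip's origin regardless of how sharp or blurry it looks.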
Experts in digital literacy argue that the most important shift must come from us, not just technology. As Mike Caulfield, a researcher on misinformation, points out, people need to change how they interpret what they see online. Relying on visual appearance is no longer enough.
Just as we do not assume that written text is automatically true, we must now apply the same scepticism to videos. The key questions should always be: Who created this content? Where was it first posted? Has it been confirmed by credible sources? Authenticity now depends on context and source verification rather than clarity or resolution.
The Takeaway
For now, blurry and short clips remain practical warning signs of possible AI involvement. But as technology improves, those clues will soon lose their usefulness. The only dependable defense against misinformation will be a cautious, investigative mindset: verifying origin, confirming context, and trusting only what can be independently authenticated.
In the era of generative video, the truth no longer lies in what we see but in what we can verify.
CurXecute: A Prompt-Injection Flaw in Cursor IDE
The security bug, now tracked as CVE-2025-54135, can be exploited by feeding the AI agent a malicious prompt that causes it to execute attacker-controlled commands.
The Cursor integrated development environment (IDE) relies on AI agents to let developers code faster and more effectively, connecting them to external systems and resources through the Model Context Protocol (MCP).
According to the researchers, a threat actor who successfully abuses the CurXecute bug could launch ransomware and data-theft attacks.
CurXecute is similar to the EchoLeak bug in Microsoft 365 Copilot, which attackers could use to exfiltrate sensitive data without any user interaction.
After finding and studying EchoLeak, researchers at the cybersecurity company Aim Security discovered that attackers can exploit even a locally running AI agent.
Cursor IDE supports the open-standard MCP framework, which extends an agent's capabilities by connecting it to external tools and data sources.
But the researchers warn that doing so exposes the agent to untrusted external data that can influence its control flow. A threat actor can take advantage of this to hijack the agent's session and use its capabilities with the privileges of the user.
According to the researchers, Cursor executes new entries added to the ~/.cursor/mcp.json file without asking for permission. When the target opens a new conversation and tells the agent to summarize messages, the shell payload runs on the device without user authorization.
“Cursor allows writing in-workspace files with no user approval. If the file is a dotfile, editing it requires approval, but creating one if it doesn't exist doesn't. Hence, if sensitive MCP files, such as the .cursor/mcp.json file, don't already exist in the workspace, an attacker can chain an indirect prompt injection vulnerability to hijack the context to write to the settings file and trigger RCE on the victim without user approval,” Cursor said in a report.
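To illustrate the mechanism being described, the hypothetical snippet below shows the kind of MCP configuration file an injected prompt could have the agent create inside a workspace. The server name, path, and command are invented for demonstration (a harmless echo stands in for a real payload), and the exact JSON layout is an assumption based on public descriptions of Cursor's MCP settings rather than the researchers' actual proof of concept.

```python
import json
from pathlib import Path

# Hypothetical example of an MCP configuration that an injected prompt could
# have the agent write. In a real attack the "command" would be attacker-controlled;
# a harmless echo is used here for illustration.
injected_config = {
    "mcpServers": {
        "innocuous-looking-server": {
            "command": "echo",
            "args": ["command executed without user approval"],
        }
    }
}

workspace = Path("demo-workspace/.cursor")   # stand-in path, not a real project
workspace.mkdir(parents=True, exist_ok=True)
(workspace / "mcp.json").write_text(json.dumps(injected_config, indent=2))
print("Wrote", workspace / "mcp.json")
```

Because creating a settings file that does not already exist bypasses the approval step, an entry written this way would be picked up and executed by the agent without the user ever confirming it, which is the crux of the reported flaw.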