
A.I. Videos Have Never Been Better. Can You Tell What’s Real?

Post by: Anis Farhan

Blurred lines emerge

It used to be easy to tell when a video was fake. But not anymore. AI-generated videos have improved at a breakneck pace in recent months, thanks to tools like OpenAI’s Sora, Runway Gen-3, and Pika Labs. These platforms can now generate hyper-realistic scenes—complete with human-like movements, facial expressions, and dynamic lighting—that are almost indistinguishable from real footage. The result is a growing wave of content that looks authentic but is entirely artificial.

 

How the tech got here

The leap in realism comes from major advances in video diffusion models—machine learning systems that synthesize footage by iteratively refining random noise into coherent frames, guided by text prompts or source images. Early AI videos looked dreamy, glitchy, and distorted. But now, platforms like Sora can produce detailed cinematic shots, smooth transitions, and complex physics simulations, often in 1080p or higher resolution.
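The core idea behind diffusion sampling—start from pure noise and repeatedly refine it toward a coherent result—can be illustrated with a deliberately toy one-dimensional sketch. This is purely illustrative and bears no resemblance to a real video model; the update rule, step count, and constants below are all invented for demonstration.

```python
import random

def toy_denoise(target: float, steps: int = 50, seed: int = 0) -> float:
    """Toy 1-D 'diffusion' sampler: begin with random noise and
    iteratively nudge the sample toward a target, shrinking the
    injected noise as the process winds down. Illustrative only."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)                 # start from pure noise
    for t in range(steps, 0, -1):
        noise_scale = t / steps             # noise fades as t -> 0
        x = x + 0.2 * (target - x) + 0.05 * noise_scale * rng.gauss(0.0, 1.0)
    return x

sample = toy_denoise(target=3.0)            # ends up close to 3.0
```

Real diffusion models do the same refinement over millions of pixel values at once, with a neural network (rather than a fixed formula) deciding how to denoise at each step.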

Crucially, these tools are now accessible to the public. Anyone with a decent prompt and a few minutes of processing time can create fake interviews, fake protests, or fake natural disasters that feel disturbingly real.

 

Why AI videos feel “off”

Even as these videos dazzle, many viewers report an eerie sensation while watching them—a kind of “uncanny valley” effect. Experts say that’s because AI-generated humans often lack the micro-details of real life. Their blinks are slightly too rhythmic, their gestures too fluid, their expressions just a bit too polished. This perfection, ironically, is what makes the content feel subtly wrong.

Still, for viewers scrolling fast or watching on mobile screens, these small flaws are easy to miss. And once they go viral, AI clips can be mistaken for genuine news or firsthand footage.

 

Deepfakes and the disinformation threat

The most alarming side of this trend is its use in deepfakes—AI videos that impersonate real people, often without consent. Political deepfakes have already appeared in elections from the U.S. to India. In early 2024, an AI-generated robocall mimicked President Joe Biden’s voice, urging New Hampshire voters to skip a primary election—a move condemned as voter manipulation.

Celebrities and influencers are regular targets, too, with their faces and voices cloned into fake endorsements, interviews, or worse. Beyond defamation, these tools have been used for financial fraud, blackmail, and the spread of conspiracy theories—posing a global risk to digital trust.

 

Why people fall for fakes

Researchers say humans are surprisingly bad at detecting AI-generated content. A 2024 study by the University of Zurich and RAND Corporation found that participants were more likely to believe AI-created social media posts—both true and false—than ones written by actual humans. When it comes to video, the illusion is even stronger. The combination of visuals, voice, and narrative tricks the brain into assuming what it’s seeing must be real.

Even after a clip is debunked, the initial impression often sticks. Psychologists call this the “continued influence effect,” and it’s one of the reasons disinformation spreads so easily.

 

Can you still spot a fake?

Spotting a deepfake or AI-generated video isn’t easy—but there are still clues. Look for unnatural blinking, mismatched shadows, poorly rendered hands, or jerky lip-syncing. Check if the background glitches, if clothing logos are warped, or if text within the scene doesn’t make sense.

Sound can be another giveaway. AI-generated voices often sound too smooth or lack background noise. Some tools still struggle with consistent accents, intonation, or emotional depth.

But as models improve, even these tells are fading—which means relying on gut instinct is no longer enough.

 

What platforms and regulators are doing

Social media platforms are under pressure to address AI content. Some, like Meta, now label AI-generated images and videos using invisible watermarks or metadata. YouTube and TikTok have added disclosure requirements for synthetic content. But enforcement is inconsistent, and bad actors can still post fakes that go undetected for hours—or even days.
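One way such labels travel with a file is as embedded provenance metadata—the C2PA standard, for instance, stores a signed manifest inside the media file itself. The sketch below is only a crude heuristic: it scans raw bytes for ASCII marker strings associated with provenance containers (the marker names are drawn from C2PA/JUMBF conventions, but real verification requires a full C2PA library that parses and cryptographically validates the manifest, not a substring search).

```python
def find_provenance_markers(data: bytes) -> list:
    """Scan raw file bytes for ASCII markers commonly associated with
    content-provenance metadata. A heuristic sketch only: a match
    suggests a provenance manifest may be present, nothing more."""
    markers = [b"c2pa", b"jumb", b"cai:"]   # names per C2PA/JUMBF conventions
    found = []
    for m in markers:
        if m in data:
            found.append(m.decode("ascii"))
    return found

# Synthetic example: bytes resembling an embedded JUMBF box with a c2pa label
blob = b"\x00\x00\x00\x24jumb\x00\x00\x00\x1cjumdc2pa\x00..."
print(find_provenance_markers(blob))        # → ['c2pa', 'jumb']
```

The limitation is obvious: a bad actor can simply strip or never include the metadata, which is why platform labeling alone cannot catch every fake.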

Meanwhile, governments around the world are drafting legislation. The EU’s AI Act mandates disclosure of synthetic media, while the U.S. and India have both proposed new regulations targeting AI misuse in elections and public safety contexts.

Still, laws are often reactive, and AI technology is evolving faster than any legal framework can keep pace.

 

How to stay alert

Experts recommend a mindset shift. Rather than assuming what you see is true, approach sensational videos with skepticism. Ask: Who posted this? Is it verified? Does it appear on trusted news sites? Use reverse image and video search tools. And remember—if something seems perfectly staged or too outrageous to be real, it might be AI.

In the future, digital literacy will be as essential as reading and writing. Knowing how to identify false content, understand context, and question sources may be our best defense in a world where video evidence can be easily faked.

 

Disclaimer:

This article has been prepared by Newsible Asia purely for informational and editorial purposes. The information is based on publicly available sources as of June 2025 and does not constitute financial, medical, or professional advice.

June 30, 2025 2:24 p.m. 441

Indonesia Blocks Elon Musk’s Grok AI Over Unsafe AI Content
Jan. 10, 2026 3:19 p.m.
Indonesia temporarily blocks Elon Musk’s Grok chatbot due to unsafe AI-generated images. The move aims to prevent misuse and protect vulnerable users
Read More
Autorun OJC Unveils Second Showroom at Oasis Mall
Jan. 10, 2026 3:11 p.m.
Autorun OJC expands its Dubai presence with a new showroom at Oasis Mall, enhancing customer access along Sheikh Zayed Road.
Read More
Bangladesh’s T20 World Cup Spot Uncertain Over Safety Concerns in India
Jan. 10, 2026 3:05 p.m.
Bangladesh cricket team’s T20 World Cup future is unclear after safety concerns prompted a request to shift matches from India, says captain Najmul Hossain Shan
Read More
US and Venezuela Initiate Dialogue to Mend Relations Post-Maduro
Jan. 10, 2026 2:59 p.m.
Venezuela and the US have initiated discussions to re-establish diplomatic ties after Maduro's ousting, with a focus on oil investments and prisoner releases.
Read More
PV Sindhu’s Malaysia Open Run Ends with Semifinal Loss to Wang Zhiyi
Jan. 10, 2026 2:49 p.m.
PV Sindhu’s comeback at Malaysia Open ends in semifinals as China’s Wang Zhiyi wins 21-16, 21-15. Sindhu showed fight but errors cost her the match
Read More
New Emergency Centre Launched in Al Dhafra by ADCMC
Jan. 10, 2026 2:48 p.m.
ADCMC enhances emergency response capabilities with a new centre in Al Dhafra.
Read More
Denmark Navigates Greenland’s Independence Amid Rising U.S. Pressures
Jan. 10, 2026 2:40 p.m.
Denmark grapples with Greenland's independence ambitions and U.S. pressures, highlighting geopolitical complexities in the Arctic.
Read More
Disruptions in Air Travel: Flights from Dubai and Turkey to Iran Canceled
Jan. 10, 2026 2:39 p.m.
Multiple flights to Iran from Dubai and Turkey were canceled due to ongoing unrest over economic issues, affecting regional travel.
Read More
Trump Claims Nobel-Worthy Peace Role in India-Pakistan Conflict
Jan. 10, 2026 2:32 p.m.
Donald Trump asserts he ended the India-Pakistan conflict, criticizes Obama’s Nobel Prize, and highlights his peace efforts and global war resolutions
Read More
Trending News