
A.I. Videos Have Never Been Better. Can You Tell What’s Real?

Post by: Anis Farhan

Blurred lines emerge

It used to be easy to tell when a video was fake. But not anymore. AI-generated videos have improved at a breakneck pace in recent months, thanks to tools like OpenAI’s Sora, Runway Gen-3, and Pika Labs. These platforms can now generate hyper-realistic scenes—complete with human-like movements, facial expressions, and dynamic lighting—that are almost indistinguishable from real footage. The result is a growing wave of content that looks authentic but is entirely artificial.


How the tech got here

The leap in realism comes from major advances in video diffusion models, machine learning systems that generate footage by iteratively refining random noise into coherent frames, guided by text prompts or source images. Early AI videos looked dreamy, glitchy, and distorted. But now, platforms like Sora can produce detailed cinematic shots, smooth transitions, and convincing physics, often at 1080p or higher resolution.
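As a loose illustration of the "iterative refinement" idea behind diffusion sampling, here is a toy one-dimensional analogy (the target signal and step rule are invented for demonstration; real video models predict the noise with a large neural network):

```python
import random

# Toy analogy for diffusion sampling: start from pure noise and repeatedly
# nudge it toward a "clean" signal. In a real model the correction comes
# from a trained network; here the target is just a fixed list of numbers.

TARGET = [0.0, 0.5, 1.0, 0.5, 0.0]  # stand-in for a "clean" frame

def denoise_step(sample, strength=0.3):
    """Move each value a fraction of the way toward the clean signal."""
    return [s + strength * (t - s) for s, t in zip(sample, TARGET)]

random.seed(0)
sample = [random.uniform(-1, 1) for _ in TARGET]  # pure noise
for _ in range(20):  # each step removes a little more noise
    sample = denoise_step(sample)

# After enough steps, the sample sits very close to the clean signal.
print(max(abs(s - t) for s, t in zip(sample, TARGET)))
```

Each pass removes a fraction of the remaining error, which is why early steps look like static and late steps look like a finished image.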

Crucially, these tools are now accessible to the public. Anyone with a decent prompt and a few minutes of processing time can create fake interviews, fake protests, or fake natural disasters that feel disturbingly real.


Why AI videos feel “off”

Even as these videos dazzle, many viewers report an eerie sensation while watching them—a kind of “uncanny valley” effect. Experts say that’s because AI-generated humans often lack the micro-details of real life. Their blinks are slightly too rhythmic, their gestures too fluid, their expressions just a bit too polished. This perfection, ironically, is what makes the content feel subtly wrong.

Still, for viewers scrolling fast or watching on mobile screens, these small flaws are easy to miss. And once they go viral, AI clips can be mistaken for genuine news or firsthand footage.


Deepfakes and the disinformation threat

The most alarming side of this trend is its use in deepfakes: AI videos that impersonate real people, often without consent. Political deepfakes have already appeared in elections from the U.S. to India. In January 2024, an AI-generated robocall mimicked President Joe Biden's voice, urging New Hampshire voters to skip the state's primary election, a move widely condemned as voter manipulation.

Celebrities and influencers are regular targets, too, with their faces and voices cloned into fake endorsements, interviews, or worse. Beyond defamation, these tools have been used for financial fraud, blackmail, and the spread of conspiracy theories—posing a global risk to digital trust.


Why people fall for fakes

Researchers say humans are surprisingly bad at detecting AI-generated content. A 2024 study by the University of Zurich and RAND Corporation found that participants were more likely to believe AI-created social media posts—both true and false—than ones written by actual humans. When it comes to video, the illusion is even stronger. The combination of visuals, voice, and narrative tricks the brain into assuming what it’s seeing must be real.

Even after a clip is debunked, the initial impression often sticks. Psychologists call this the “continued influence effect,” and it’s one of the reasons disinformation spreads so easily.


Can you still spot a fake?

Spotting a deepfake or AI-generated video isn’t easy—but there are still clues. Look for unnatural blinking, mismatched shadows, poorly rendered hands, or jerky lip-syncing. Check if the background glitches, if clothing logos are warped, or if text within the scene doesn’t make sense.

Sound can be another giveaway. AI-generated voices often sound too smooth or lack background noise. Some tools still struggle with consistent accents, intonation, or emotional depth.

But as models improve, even these tells are fading, which means relying on gut instinct is no longer enough.


What platforms and regulators are doing

Social media platforms are under pressure to address AI content. Some, like Meta, now label AI-generated images and videos using invisible watermarks or metadata. YouTube and TikTok have added disclosure requirements for synthetic content. But enforcement is inconsistent, and bad actors can still post fakes that go undetected for hours—or even days.
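The invisible watermarks mentioned above vary by platform and are generally proprietary. As a rough sketch of the underlying idea only, here is a toy least-significant-bit watermark; this is not Meta's scheme or any real provenance system, and the `embed`/`extract` helpers are invented for illustration:

```python
# Toy LSB watermark: hide a short text label in the lowest bit of each
# pixel's brightness value. Real provenance systems (metadata standards,
# robust watermarks) are far more sophisticated; this only shows how a
# label can ride along invisibly inside the pixels themselves.

def embed(pixels, label):
    """Overwrite the low bit of successive pixels with the bits of `label`."""
    bits = [(byte >> i) & 1 for byte in label.encode() for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # changes brightness by at most 1
    return out

def extract(pixels, length):
    """Read `length` bytes back out of the low bits."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        data.append(byte)
    return data.decode()

pixels = [120] * 200  # flat grey image, as a list of brightness values
marked = embed(pixels, "AI-gen")
print(extract(marked, 6))  # prints "AI-gen"
```

Because each pixel changes by at most one brightness level, the label is invisible to the eye but trivially machine-readable, which is the basic bargain all invisible watermarks strike.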

Meanwhile, governments around the world are drafting legislation. The EU’s AI Act mandates disclosure of synthetic media, while the U.S. and India have both proposed new regulations targeting AI misuse in elections and public safety contexts.

Still, laws are often reactive. And AI tech is evolving faster than any legal framework can keep up.


How to stay alert

Experts recommend a mindset shift. Rather than assuming what you see is true, approach sensational videos with skepticism. Ask: Who posted this? Is it verified? Does it appear on trusted news sites? Use reverse image and video search tools. And remember—if something seems perfectly staged or too outrageous to be real, it might be AI.
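Reverse image and video search tools typically match near-duplicate frames by comparing compact perceptual fingerprints. A minimal sketch of one such technique, the average hash ("aHash"), is below; it is an illustrative simplification, not any specific service's algorithm:

```python
# Average-hash sketch: two visually similar frames produce fingerprints
# that differ in only a few bits, so a search index can find re-uploads
# and lightly edited copies of a clip.

def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale image
    (a list of 8 rows of 8 brightness values, 0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Each bit records whether a pixel is brighter than the average.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; small distances mean near-duplicates."""
    return bin(h1 ^ h2).count("1")

# Two frames that differ only slightly hash to nearby fingerprints.
frame_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
frame_b = [row[:] for row in frame_a]
frame_b[0][0] += 10  # tiny change, e.g. compression noise
print(hamming_distance(average_hash(frame_a), average_hash(frame_b)))
```

Production systems downscale real frames and use more robust hashes, but the principle is the same: small edits move the fingerprint only slightly, so provenance checks survive recompression and cropping.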

In the future, digital literacy will be as essential as reading and writing. Knowing how to identify false content, understand context, and question sources may be our best defense in a world where video evidence can be easily faked.


Disclaimer:

This article has been prepared by Newsible Asia purely for informational and editorial purposes. The information is based on publicly available sources as of June 2025 and does not constitute financial, medical, or professional advice.

June 30, 2025, 2:24 p.m.
