Since the outbreak of the war in Iran, social media platforms have been flooded with AI-generated videos, recycled footage, and false claims, complicating the public’s understanding of events on the ground. Experts warn that these digital fabrications are shaping perceptions in highly emotional ways, prompting governments to adopt stricter information controls.
Easy access to artificial intelligence tools has allowed anyone to produce realistic deepfakes depicting combat, missile strikes, and destruction of civilian areas. These videos circulate rapidly, often misleading millions of viewers and amplifying the influence of state narratives as well as individual content creators seeking clicks and revenue.
“Dramatic images and videos claiming to show real-time battle scenes are flooding social media feeds, spreading rapidly and misleading millions,” said Marc Owen Jones of Northwestern University in Qatar. He added that social media has become a battleground where all sides of the conflict compete for “hearts and minds.”
Jones noted that the content differs by region. On the American side, videos often combine Hollywood clips and memes designed to appeal to far-right audiences, while Iranian-produced AI content exaggerates military successes, aiming to influence Gulf states and shape perceptions abroad.
Some of the most widely shared examples include AI-generated videos showing the US aircraft carrier USS Abraham Lincoln on fire. The images were so convincing that President Donald Trump said he called his generals to confirm their authenticity before later posting on Truth Social that the ship had not been attacked. Other false clips purported to show US troops in distress or buildings destroyed in Gulf cities.
In fast-moving crises, the scarcity of verified information creates a vacuum that misinformation quickly fills. “When people are worried, they crave information, but that information is often false,” Jones said. He highlighted a recent wave of rumors claiming Israeli Prime Minister Benjamin Netanyahu had died, triggered by glitches in a video that some argued was AI-generated. Netanyahu later released videos to prove he was alive, yet speculation persisted online.
Some content is linked to coordinated campaigns, including anonymous accounts or bots that amplify messages to make them appear more popular or credible. Other content is satirical or parodic, depicting world leaders such as Trump or Netanyahu in exaggerated or absurd scenarios. Even so, these clips can be mistaken for real events, complicating efforts to separate truth from fabrication.
Jones warned that the sheer volume of misleading material makes it difficult for the public to distinguish fact from fiction. “False information can spread up to ten times faster than accurate reporting, and corrections rarely reach the same audience,” he said. He urged caution in interpreting dramatic footage, stressing that visual realism alone is no longer proof of authenticity.
As the conflict in Iran continues, social media remains a contested space where AI-generated content, state narratives, and viral rumors intersect, leaving ordinary users to navigate an increasingly complex and manipulated information environment.
