
A wave of AI-driven disinformation has emerged online following Israel’s airstrikes on Iran. AI-generated photos and dozens of fake videos claiming to show Iranian military superiority and damage inside Israel have quickly spread across social media, pushing misleading narratives to huge audiences.
Meanwhile, pro-Israel accounts have recirculated old protest footage to misrepresent Iranian public opinion. Experts caution that this is the first conflict in which generative AI has been used at scale to sway public opinion and spread false information worldwide.
AI Disinformation Conflict Reshapes Digital Warfront Narratives
One widely shared AI-generated video showed missiles raining down on Tel Aviv and drew more than 27 million views. Another purported to show an Israeli F-35 destroyed in Iran, but viewers spotted visual inconsistencies that pointed to fabrication. BBC Verify also identified several accounts that were rapidly gaining followers by spreading this kind of false content.
One prominent example is the pro-Iranian page Daily Iran Military, whose follower count grew by 85% in less than a week. These accounts typically post AI-generated conflict content, carry blue verification ticks, and appear genuine. Experts say some are run by “engagement farmers” hoping to profit from attention-driven platforms.
Is Misinformation Outpacing Reality in the Conflict?
One post falsely claimed to show people gathered in the desert near a downed Israeli jet. A closer look, however, revealed telltale AI flaws, including identical-looking civilians and undisturbed sand around the wreckage. Another clip turned out to be footage from a flight simulator video game, but only after it had been viewed more than 21 million times on TikTok. TikTok removed it once alerted, but by then millions had been misled.
According to reports, Russia is promoting fake footage of American-made jets being destroyed in an effort to undermine confidence in Western weapons. Much of the AI-generated content is also deliberately dark, depicting nighttime attacks that are difficult to verify, which makes it harder for analysts and the general public to detect.
AI tools such as Grok have added to the confusion. When users asked the chatbot to verify clips, it incorrectly declared fake videos authentic, occasionally citing media sources as support. One video of a seemingly endless missile convoy showed rocks moving on their own, a classic warning sign of AI generation, yet Grok maintained it was genuine, raising questions about the chatbot’s reliability.
Social Media Platforms Struggle Against Misinformation Surge
The AI disinformation conflict is not limited to anonymous accounts. Some fake content has been shared on official Iranian and Israeli channels, deepening the confusion. Tehran’s state media posted AI-generated visuals of destroyed F-35s, while the IDF shared outdated footage that was later flagged by a community note on X.
As more fake content floods social media, researchers note that ordinary users also play a role: many repost material that matches their beliefs without realizing it is false. Sensational or emotional posts, particularly those built around fake videos, tend to go viral fastest. The public is therefore advised to rely on credible sources and think critically before reposting.
Experts predict that the use of AI in wartime messaging will only grow. More sophisticated fakes could shape public opinion and even policy responses, and the online battle for truth will keep evolving as the Israel-Iran conflict does.
Final Thoughts
The AI disinformation conflict has reshaped how the world perceives war. Fake images and AI-generated videos have made the digital battlefield nearly as consequential as the physical one. Amid escalating tensions, verifying information and sharing responsibly matter more than ever.