AI’s Role in Misinformation During Conflicts

Last Updated: 29th June 2025
Author: Nick Smith, with the help of Grok 3
Introduction
Artificial intelligence (AI) has reshaped the landscape of information dissemination, particularly during conflicts, where its potential for both innovation and misuse is starkly evident. The ability of AI to generate hyper-realistic content has fueled the spread of misinformation, from deepfake videos to fabricated narratives, complicating the information environment in war zones. This article examines AI’s role in amplifying misinformation during recent conflicts, including the Russia-Ukraine war, the Israel-Iran conflict, and the India-Pakistan conflict, exploring its mechanisms, impacts, and efforts to counter its spread.
The Rise of AI-Generated Misinformation
AI technologies, particularly generative AI, have been weaponized to create convincing yet false content during conflicts. In the Russia-Ukraine war, which intensified following Russia’s full-scale invasion in February 2022, AI-generated deepfakes and propaganda have been rampant. For instance, a deepfake video of Ukrainian President Volodymyr Zelenskyy falsely urging surrender circulated in March 2022, spread in part through a hacked Ukrainian news site to amplify its reach. By November 2023, more sophisticated deepfakes of Zelenskyy and General Valerii Zaluzhnyi circulated, falsely blaming Zelenskyy for internal military failures and showing a leap in the technology’s realism. These videos, shared on pro-Kremlin Telegram channels, aimed to sow discord within Ukraine’s leadership and demoralize its population.
Similarly, in the Israel-Iran conflict of 2025, AI-generated videos and images depicting fabricated missile strikes and downed jets garnered over 100 million views on social media platforms like X. During the India-Pakistan conflict in May 2025, deepfake videos and fabricated news clippings, such as a false report praising Pakistan’s air force, went viral, escalating tensions. Emmanuelle Saliba, Chief Investigative Officer at GetReal Security, noted that the Israel-Iran conflict marked "the first time we've seen generative AI be used at scale during a conflict," highlighting the accessibility of tools like Google’s Veo 3, which enable even non-experts to create deceptive content.
Mechanisms of AI-Driven Misinformation
AI-driven misinformation operates through several key mechanisms:
- Deepfakes and Synthetic Media: Tools like Google’s Veo 3 and generative adversarial networks produce hyper-realistic videos and images. In Ukraine, a fake E! News segment claiming USAID funded Hollywood celebrity trips to Kyiv amassed 31 million views after being shared by high-profile figures like Elon Musk and Donald Trump Jr. Similarly, AI-generated images of destroyed cities or crying babies in Gaza went viral, exploiting emotional triggers to manipulate public sentiment.
- Chatbot-Generated Falsehoods: AI chatbots, optimized for quick responses, often spread misinformation when users ask them to verify content. During the India-Pakistan conflict, xAI’s Grok misidentified old footage from Sudan as a recent attack. Studies suggest up to 40% of responses from models like Grok and ChatGPT contain inaccuracies on controversial topics. In Ukraine-related queries, chatbots such as Perplexity and Bing Chat (now Copilot) repeated false Russian narratives in over 25% of responses, often failing to include disclaimers debunking Kremlin claims.
- Amplification by Social Media: Platforms like X and TikTok amplify AI-generated content through algorithmic recommendation systems. In Ukraine, a Russia-based network of nearly 800 fake TikTok accounts spread disinformation targeting Ukrainian officials, contributing to the dismissal of Defense Minister Oleksiy Reznikov in September 2023. These accounts, using stolen celebrity images, posted single videos to evade detection, exploiting TikTok’s algorithms.
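The single-post, stolen-avatar pattern described in the last item lends itself to simple screening heuristics. The Python sketch below is illustrative only and assumes a hypothetical Account record rather than any real platform API: it groups one-post accounts by an exact fingerprint of their profile image and flags unusually large clusters. A production system would use perceptual hashing so that resized or re-encoded copies of the same stolen photo still match.

```python
# Illustrative screening sketch, not any platform's real detection pipeline.
# The Account record and the cluster-size threshold are hypothetical.
import hashlib
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Account:
    handle: str
    profile_image: bytes  # raw bytes of the profile picture
    post_count: int


def image_fingerprint(image: bytes) -> str:
    # Exact-duplicate fingerprint; a perceptual hash would also catch
    # cropped or re-encoded copies of the same stolen image.
    return hashlib.sha256(image).hexdigest()


def flag_suspicious_clusters(
    accounts: list[Account], min_cluster_size: int = 5
) -> list[list[str]]:
    """Return groups of single-post accounts sharing an identical profile image."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for account in accounts:
        if account.post_count == 1:  # the one-video-per-account evasion pattern
            clusters[image_fingerprint(account.profile_image)].append(account.handle)
    return [handles for handles in clusters.values() if len(handles) >= min_cluster_size]
```

Even a crude heuristic like this illustrates why the network split its activity across hundreds of accounts: any single account looks unremarkable, and only aggregate signals reveal coordination.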
Consequences of AI Misinformation in Conflicts
The impacts of AI-driven misinformation are profound:
- Erosion of Public Trust: In Ukraine, false claims about Zelenskyy’s corruption or surrender undermined trust in leadership, with 20% of commenters on deepfake videos believing them to be real. In India-Pakistan, fabricated reports filled information vacuums, shaping public narratives. This erosion extends to media, as real videos are often dismissed as fake, fueling conspiracy theories.
- Conflict Escalation: Misinformation can provoke real-world consequences. In Ukraine, AI-generated content alleging Ukrainian aggression in Donbas or corruption fueled Russian propaganda, potentially influencing military decisions. In India-Pakistan, false missile strike footage risked escalating public anger and military responses.
- Undermining Democratic Processes: AI misinformation has targeted elections globally, with 215 documented instances in 2024. In Ukraine, false narratives about Western aid misuse aimed to weaken international support, particularly in Europe and the U.S.
- Challenges for Journalism: The flood of misinformation overwhelms newsrooms. In Ukraine, fact-checking units like Le Monde’s Les Décodeurs struggled with the volume of visual disinformation. Indian journalists reported similar challenges during the India-Pakistan conflict, with unverified sources dominating narratives.
Efforts to Combat AI Misinformation
Efforts to counter AI-driven misinformation include:
- Fact-Checking Initiatives: Organizations like BBC Verify, Ukraine’s StopFake, and Fatabyyano use AI to detect fakes. Ukraine’s War of Words tool analyzes Russian media to expose propaganda, updated daily and tracking narratives back to 2012.
- Legislative Measures: Ukraine’s Law on Counter Disinformation, enacted as an emergency response to Russian aggression, balances free speech with penalties for false reporting and war propaganda. Denmark’s proposed ban on unauthorized deepfakes sets a precedent for regulating AI content.
- AI Detection Tools: Companies like GetReal Security develop tools to identify manipulated media via metadata analysis (a minimal metadata-screening sketch follows this list). However, these tools struggle with nuanced content and require constant updates to match AI advancements.
- Public Awareness: Experts advocate for media literacy to foster critical thinking. Posts on X emphasize verifying sources before sharing, with users like @BohuslavskaKate urging supporters to avoid spreading “AI slop.” Ukraine’s public awareness campaigns promote resilience against disinformation.
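To make the metadata-analysis approach mentioned above concrete, here is a minimal Python sketch, not GetReal Security’s actual method: it reads an image’s EXIF data with Pillow and flags files that lack camera information or whose Software tag names a known generator. The generator list is illustrative, and missing metadata is weak evidence on its own, since social platforms routinely strip EXIF data from legitimate photos.

```python
# Minimal metadata-screening sketch (assumes Pillow: `pip install pillow`).
# A heuristic screen only; it proves neither manipulation nor authenticity.
from PIL import Image, ExifTags

# Illustrative, incomplete list of generator names sometimes found in metadata.
SUSPECT_SOFTWARE = ("midjourney", "dall-e", "stable diffusion", "firefly")


def screen_image(path: str) -> list[str]:
    """Return human-readable warnings about an image's EXIF metadata."""
    warnings: list[str] = []
    exif = Image.open(path).getexif()
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    if not tags.get("Make") and not tags.get("Model"):
        warnings.append("no camera make/model metadata (stripped, or never present)")

    software = str(tags.get("Software", "")).lower()
    if any(name in software for name in SUSPECT_SOFTWARE):
        warnings.append(f"Software tag names a generator: {software!r}")

    return warnings
```

Real detection tools go much further, combining provenance standards such as C2PA Content Credentials with pixel-level forensics, which is one reason they need constant updating as generators improve.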
Challenges and Ethical Considerations
AI’s role in misinformation poses ethical dilemmas. While AI aids detection, its reliance on vast datasets can perpetuate false narratives, as seen with the pro-Kremlin Pravda network, whose false claims chatbots repeated 33% of the time. Tech platforms’ reduction of human fact-checking teams increases reliance on imperfect AI systems, and traditional detection methods that depend on linguistic cues are increasingly ineffective against AI-generated content that mimics human writing.
In Ukraine, Russia’s use of AI to optimize strikes by analyzing social media posts raises further concerns, exploiting civilian data to maximize disruption. Balancing free speech with regulation remains a challenge, as Ukraine’s laws highlight the tension between security and democratic principles.
Conclusion
AI’s role in misinformation during conflicts, exemplified by the Russia-Ukraine war, Israel-Iran, and India-Pakistan conflicts, underscores its dual nature as a tool for truth and deception. The rapid evolution of deepfakes, chatbot errors, and social media amplification demands robust countermeasures, from fact-checking and legislation to public education. As conflicts continue, addressing AI-driven misinformation requires global cooperation, ethical AI development, and a commitment to preserving truth in the digital age.