The year 2024 was pivotal for AI interference in elections. Ironically, commentators were quick to swing from declaring the 2024 elections doomed by artificial intelligence (AI) to minimizing the impact it had, or even denying that AI had any impact at all. Instead of cycling between doom and reprieve, we should be looking at the data and deriving lessons for future elections. In 2025, elections have already taken place in Australia, Canada, Germany, Lebanon and Romania. In the second half of 2025, the world anticipates important elections in Argentina, Chile, Iraq, Moldova and the Netherlands, among others. Electoral observers and participants have flagged close races and hotly debated political surprises in countries where electoral processes have already concluded, including Australia, Canada, Portugal and various subnational US elections.
AI is a groundbreaking technology that will shape the world as we know it. But how will it impact future politics? As part of a recent study by the International Panel on the Information Environment (IPIE), we examined how AI has been employed in elections around the world in 2024, and what this might teach us about current and future trends.
As with any such controversy in the twenty-first century, widespread access to the internet, social media platforms and online discourse frequently exposes individuals to misleading and often harmful narratives, limiting their ability to distinguish amplified falsehoods from reality. Even more evident in the modern political environment, our collective online interconnectedness now presents an unprecedented opportunity to shape the voting patterns of billions around the world who rely on the digital space to educate themselves and others on key political issues. Whether national or subnational, first-round voting or a complete rerun (as in the case of Romania), modern and future elections share a common theme: an increasingly digital nature vulnerable to manipulation. In the modern context, this vulnerability is intrinsically tied to AI, predominantly through its contribution to the mass generation of information and misinformation.
Dominance of AI Content Creation and the Internationalization of Misinformation
In 2024, more than 80 percent of countries experienced observable instances of AI usage relevant to their electoral processes. By far the most popular use of AI across elections was content creation (accounting for 90 percent of all observed cases), compared to content proliferation (24 percent), hypertargeting (three percent) and unclear uses (four percent). AI-created content included everything from audio messages and AI-powered avatars to fake political endorsements by celebrities and AI-generated messages from dead politicians.
Across countries as diverse as Bangladesh, France, Namibia, South Africa and Taiwan, AI was relied upon to create content designed to misguide voters, defame candidates or prop up unrealistic images of political parties. Notably, content creation was often highly targeted, specifically aimed at underlying societal predispositions or prejudices. For instance, in elections across India, Indonesia and Mexico, AI was used to create defamatory images of female candidates, specifically building on and amplifying misogynistic stereotypes. This finding builds on a substantial body of evidence suggesting that disinformation has disproportionate consequences for minority groups typically targeted by racist, misogynist, xenophobic and other hateful beliefs.
In general, our data confirms that AI allows for high proportions of tailored content. As part of this trend, in countries across Africa and Asia, political campaigners produced AI content, such as video deepfakes, of former US President Joe Biden and current US President Donald Trump endorsing local political parties and candidates. AI technologies offered these campaigners an opportunity to have world-leading figures “speak” on highly localized, niche topics (imagine Trump commenting on African agriculture). This increased granularity of misinformation has also internationalized it, injecting avatars of world leaders and celebrities into local election campaigns.
Increased AI Experimentation
While AI interference in elections is frequently imagined at the national level, subnational elections offer insights into AI experiments that some campaigners would be cautious to attempt nationwide. Further, as information security measures improve in national elections around the world, nefarious actors can look to local and regional campaigns as easier targets for influence. With fewer regulations and fewer resources available to authorities, local campaigns provide a key testing ground for new AI technology, which can still sway significant election results, albeit on a smaller, more localized scale. Elections across the Americas, Africa, Asia and Europe saw cases of AI content emulating specific regional dialects and slang, or synthetic audio impersonating local officials in key battleground states, provinces or regions.
This development is particularly worrisome because AI’s language-translation capabilities are among its main benefits for political candidates trying to reach more voters. India’s Prime Minister Narendra Modi, for example, used AI to translate campaign speeches into more than 100 languages for diverse constituents across the country. This example underlines how these technological advancements can benefit democracy when used transparently. In Japan, an independent candidate in the Tokyo gubernatorial race, Anno Takahiro, used an AI avatar to respond to 8,600 questions from voters, which increased voter engagement and potentially interest in the electoral race. Given these examples, the question is how to harness positive use cases while reining in malevolent AI exploitation.
Arguably, the most concerning observation from our research is that, in some cases, AI can have an outsized, manipulative impact on elections. This finding is most evident in the case of Romania, where the 2024 presidential election results were annulled after evidence showed AI-powered interference using manipulated videos. Although this case accounts for a very small portion of the global data set, the targeted and proficient nature of the interference — very likely foreign sponsored — had serious ramifications for future Romanian electoral integrity. Cases like this demonstrate an important lesson: AI can act as an accelerator in the tool kit of traditional disinformation campaigns.
2025 Elections: Same, Same but Different
Elections around the world this year have underscored many of the standout takeaways of the IPIE report: the majority of races are seeing AI-generated content enter the fray; the impact of AI is often supplementary but in some instances pre-eminent; and, generally, AI allows for ever more targeted campaigning.
In the Australian and Canadian elections this year, the ruling political parties experienced unexpected boosts in support in the final weeks before voting, contrary to earlier polling that anticipated results favouring conservative parties in both countries. While both countries now settle into their current political leadership, federal authorities in both Canada and Australia warned against the risk of AI interference, predominantly singling out hostile foreign actors as potential culprits.
In a notable Canadian case, an AI deepfake of Prime Minister Mark Carney, originally released directly before the election, reached more than one million views on social media by June. With an August byelection in Alberta, Canada, set to determine the political future of Conservative opposition leader Pierre Poilievre, AI-related threats remain live. This example shows that foreign actors continue their assault on democracies through information campaigns. In our 2024 data, a fifth of all observable AI incidents (20 percent) were produced by foreign actors, while 46 percent of incidents had “no known source,” since attribution is often difficult.
South of the Canadian border, subnational US elections continue to face AI-generated content from a wide variety of sources. Even in the wake of the 2024 US presidential election, local, regional and state elections offered a breeding ground for wider polarization, conspiracy theories and disinformation spurred on by nefarious applications of AI. In New York state, accusations of potentially improper AI usage were levied against both Zohran Mamdani in New York City’s mayoral primary and Democratic candidate Blake Gendebien in the state’s twenty-first congressional district. Whether or not these allegations are accurate, these cases affirm that even an accusation of AI use (false or otherwise) can be a useful political tool for swaying voters against a candidate.
Across Europe, worsening polarization and political division increase national susceptibility to AI-generated interference — a trend exacerbated during election years. In Germany, candidates of the far-right party Alternative für Deutschland (AfD) employed an AI-generated deepfake containing nostalgic images of the past, though it had limited impact. This case recalls the earlier-mentioned trend of resurrecting dead politicians with AI to bolster the popularity of current political candidates and parties. It remains unclear whether voters find this tactic unsettling or persuasive. Regardless, the AfD saw a substantial popularity boost, receiving the second-highest share of votes (after the Christian Democratic Union).
Fostering Democratic Developments with AI
In 2025, we should move away from considering AI a looming threat and instead acknowledge its incorporation into daily activities. This shift includes the fact that political campaigners have added AI to their tool kit: they are testing the boundaries of what is acceptable, not only from a regulatory standpoint but also in the eyes of citizens. This AI mainstreaming is not confined to election seasons, as seen in how President Trump repeatedly embeds AI-generated content into his communications, from Gaza to the Vatican.
Importantly, data shows that at least 16 percent of AI employment was for seemingly benevolent purposes, such as translating content and using generative AI chatbots to connect with more voters.
Curtailing the overwhelming number of malevolent applications of AI requires robust, democratic and forward-looking policy responses. Recommendations range from short-term transparency requirements and platform accountability to long-term legislative frameworks and AI watermarking. A strategic road map is needed, not only to mitigate immediate threats but also to reshape and strengthen information ecosystems and reinforce our collective resilience.
Of particular concern is the limited ability of individuals to detect the use of generative AI themselves, especially as AI content grows progressively more realistic. Ultimately, AI is not a stand-alone disruptor but rather a powerful new layer in existing influence operations, with the potential to outpace rules and regulations if not managed appropriately. To protect electoral integrity, governments, platforms and civil society should act together, combining the technical know-how and institutional reforms necessary to safeguard democratic legitimacy in a new information age.