This will be an unprecedented year for democracy. More than half the world’s adults will have the chance to vote in major elections. These include citizens of most of the largest democracies, including India, Indonesia, Mexico, South Africa and the United States, not to mention those governed by the European Parliament. The United Kingdom and even Canada might also have elections this year.
While it’s exciting that so many people will exercise their voting rights, there are widespread fears that elections in 2024 will contribute to democracy’s global decline. Autocrats such as Russia’s Vladimir Putin will surely “win” elections, and so could weak-on-democracy politicians such as Donald Trump. Why is there so much pessimism about democracy’s health? While there are many factors, the polluted online information environment is often mentioned. As Nobel Prize-winner Maria Ressa puts it, provocatively, a “tech-enabled Armageddon” is undermining democracy everywhere.
Election campaigns are increasingly contested online, whether on platforms such as Facebook, mass messaging services such as Telegram or search engines such as Google, not to mention the backend ad markets and data repositories that shape so much campaign messaging. And that’s even before we factor in the potentially disruptive role of generative artificial intelligence (AI). The World Economic Forum recently identified AI-enhanced misinformation and disinformation as the top source of catastrophic risk globally in the next two years.
The online component of this year’s elections deserves scrutiny. Yet there is a danger that this attention will fall into the traps of hot takes, overblown assertions about the power of tech to affect election outcomes, and US-centrism.
To counter these shortcomings, we are collaborating with CIGI to produce a series of essays on key countries facing elections this year. The series will enable us to underscore three broader points about the global collision of technology and politics in 2024.
First, we need clearer-eyed evaluations of the use and misuse of digital technologies in election campaigns. It is too simplistic to blame platforms for an outcome that someone does not like. But it is also too simplistic to dismiss the role of platforms and online campaigning altogether. While online messages — even deepfake videos — are unlikely to single-handedly change voters’ preferences, they can be used to manipulate voters.
As an example, in the months prior to Bangladesh's recent election, one investigation found government-aligned commentaries written by fake experts that were then widely referenced in news media, while another identified a faked video message of an opposition candidate declaring she had dropped out of the race.
Furthermore, it is important to analyze the political conflict over the information environment during campaigns. Preceding Taiwan’s recent election, for instance, many Western publications emphasized the likelihood of China-backed influence operations, echoing the narrative of the ruling Democratic Progressive Party (DPP). However, the other two major parties instead accused the DPP of being the primary originator of “fake news” during the campaign. Ultimately, the election was dominated by entrenched party allegiances and disagreements over fundamental domestic issues.
As Panthea Pourmalek, Yves Tiberghien and Heidi Tworek suggested in a CIGI piece comparing four elections in 2022, “social media and platforms can play a profound role in electoral disruption, but the ways this disruption plays out can vary significantly, and are both time- and context-specific.” For that reason, our series will bring together analysts with deep knowledge of the countries in question.
Second, 2024 will reveal how new regulatory approaches shape the online environment during elections. Europe’s Digital Services Act (DSA) is the world’s most prominent effort to address online harms by large platforms and search engines. European regulators launched the first DSA-based investigation into platform wrongdoing in December, probing whether X (formerly Twitter) failed to adequately address illegal content and information manipulation. The European Parliament elections in June will present a major stress test for the DSA, perhaps revealing whether the European approach is one that other democracies should adopt.
Tech regulation can be used to counteract foreign interference, online violence and intentional falsehoods, and it can also be used to silence voices and amplify harm. Indeed, politicians in power may adopt “fake news” laws and other regulations to stifle criticism and undermine opponents, as Jonathan Corpus Ong previously argued in a CIGI commentary.
Analyzing the conflict over platform policies in elections can reveal how political power shapes our online environments. Cambodia provides a compelling illustration. In January 2023, the country's then prime minister Hun Sen threatened political opponents with violence in a livestream on Facebook. Although the video violated the company's policy, it remained online for five months before Meta's independent Oversight Board recommended that Hun Sen's account be suspended. In retaliation, Hun Sen stopped using Facebook, threatened to ban the platform from Cambodia and led his party to an election win (Hun Sen's son is now prime minister). Ultimately, Meta decided not to suspend his account. The episode illustrates the tension between platforms' aims to moderate content and to make money, particularly when facing an entrenched political leader.
Third, the US election is important but should not eclipse attention to the rest of the world. The US-centrism of platform policies is well established. While Americans may be concerned about the state of content moderation, almost every other country receives far fewer resources and less attention from online platforms. In 2020, for example, the United States accounted for 87 percent of the time spent by Facebook contractors and employees on moderating false or misleading content.
Civil society groups have organized to address platform policies’ neglect of many countries and communities. The Global Coalition for Tech Justice was created with the aim to “protect people and elections, not Big Tech” during this “year of democracy.” Convened by Digital Action (on whose board Heidi Tworek sits), the campaign is one of many efforts to ensure that decisions by tech companies — usually headquartered in the United States — are responsive to diverse countries and communities globally.
One year from now, people could inhabit very different political worlds than the ones that currently exist. This series will examine the role that platforms and digital tech have played and will play, and the health of our online environment. Our hope is that these analyses will help us understand a most consequential year for democracy.