Computational Propaganda Is Here to Stay: What to Expect in Elections in 2022

Political operatives will be sharpening their digital knives in efforts to manipulate public opinion.

February 18, 2022
Strategies and patterns of computational propaganda emerged during conversations with digital strategists who worked for Indian Prime Minister Narendra Modi’s Bharatiya Janata Party in India. Modi is pictured here at a rally in Varanasi, April 25, 2019. (Adnan Abidi/REUTERS)

A torrent of course-changing elections is set to take place around the globe in 2022. In the United States, contentious midterm elections are likely to follow the similarly fraught contests of 2018. In Brazil, Hungary and India, aspiring authoritarian leaders look to shift once-promising democracies further toward illiberalism. In Libya and Lebanon, ongoing conflicts could prevent the establishment of much-needed stability. Digital media will play an integral role in all of these elections. And as political operatives sharpen their digital knives, we argue that three dynamics will be prevalent in the spread of election-related computational propaganda: cross-platform communication, the continued rise of social media influencers in the domain of politics, and false and misleading information fuelling hate speech (and vice versa).

Cross-Platform Dynamics of Computational Propaganda

Social media platforms sustain their advertising businesses with public streams of information that prioritize outrage, tribalism and polarization. But increasingly, users self-segregate into private and ephemeral spaces, making oversight difficult. Chat apps such as WhatsApp, Signal and Telegram continue to grow in popularity around the world. These platforms, often defined by encryption, give users opportunities to withdraw into the intimacy and comfort of small-group conversations with like-minded people, to send messages that leave no trace and to keep certain people out.

Wherever users move online, however, industries follow. Computational propagandists and professional trolls, who use social media platforms in efforts to manipulate public opinion, are no exception. Research underscores that disinformation campaigns are often planned on one platform and then planted on others. Examples abound: during the last Indian general election, manipulation of Twitter trends was coordinated within the chat app WhatsApp. Similarly, the Russian Internet Research Agency’s manifold online activities during the 2016 US presidential election included floating trial-balloon messages on Reddit that were later spread on the mostly public forum of Twitter.

In our research, we observe that false information often finds its way from the relative privacy of chat apps into more public territories, such as public-facing social media platforms and even legacy media like television and radio. Such information cascades propel false information not only within but also across platforms. The relative privacy of some platforms serves as a springboard and safe haven for coordinating activities on other platforms: who should share false or misleading information (bots, humans or combinations thereof), how to share it and what to share. Such strategies and patterns emerged during our conversations with digital strategists who worked for Indian Prime Minister Narendra Modi’s Bharatiya Janata Party, with independent political campaign contractors in Mexico and with former white nationalist activists in the United States.


With this in mind, encrypted platforms are likely to be pivotal arenas for harmful, misleading and false content during the 2022 elections worldwide. This is particularly concerning because users assume these platforms’ privacy affords them more safety (political and otherwise) than it actually does. Any platform that allows certain publics to be excluded also creates a vacuum, and an opportunity, for the spread of computational propaganda. In other words, the chances that corrections will happen, that fact-checkers can intervene or that platforms’ content moderation regimes will weed out problematic content are slim in a private WhatsApp group. At the same time, breaking the end-to-end encryption behind these platforms, or creating loopholes around it, is anathema to most privacy advocates.

Influencers and the Expansion of the “Political”

Influencers harness and exploit the algorithmic recommendation and amplification that platforms provide, tethering their personal brands to social or cultural issues as well as to corporations. Far-right extremist influencers such as Richard Spencer and Milo Yiannopoulos play outsized roles in manipulative information ecosystems. But even influencers operating in more benign territory inevitably confront the question of how much politics they want to invite into their feeds. Like certain Hollywood celebrities, some influencers studiously avoid admitting to any particular political bent. Others wholeheartedly embrace political campaigns, get involved (and even get paid) and seed campaign content. In 2020, for instance, reproductive rights activist and influencer Deja Foxx joined Kamala Harris’s campaign for US president. In India, the 2019 general election saw an increased focus on influencers, including Bollywood figures, as political tools.

In our research, we have explored the motivations of influencers on TikTok and Instagram, especially small-scale influencers, who decide to get involved with political campaigns. Many are paid or otherwise compensated to share campaign content or speak out on behalf of political campaigns. We have identified a divergent set of motivations in such cases. Some influencers are genuinely attempting to connect their brand more closely to their values, or, in other words, to wear their values onscreen. Others are taking a stance in light of ongoing social justice events, such as the 2020 Black Lives Matter protests, or mobilizing young people to take up environmental activism in Kazakhstan. Still others display support for certain candidates or issues only because they think their audience expects it, or for personal and financial gain.


Political campaigns have actively begun to embrace influencers. In the United States, presidential candidates Donald Trump, Michael Bloomberg and Bernie Sanders all made use of them. And just as influencers in the United States might sway voters to support one candidate over another, influencers in Libya might tip the balance in 2022 in favour of a harrowing presidential candidate: Muammar Gaddafi’s son Saif al-Islam Gaddafi. Content about Saif has been amplified by Russian networks, and the visibility it gained has already motivated opposing forces to promote counternarratives. In 2022, the Libyan online space remains fragmented and little understood, with influencers largely operating below the radar of those interested in stabilizing the country. Researchers have witnessed similar dynamics in volatile places such as Ethiopia.

Political campaigners are likely to continue exploiting these “relational organizing” tactics by contracting with influencers directly, as well as through influencer marketing firms. Partisan or political influencers are therefore likely to play an increasingly important role in upcoming races. This is concerning because such campaigns, particularly those using small-scale influencers, harness influencers’ trusted relationships with audiences that are often uniquely situated to respond to particular ideas or causes. Similarly, campaigns can recruit individuals who hold outsized sway in small but important publics defined by social identity factors such as race, ethnicity, gender identity or political ideology. It is demographic and opinion-leader politics 2.0. As we look forward, it will be crucial to brace social media users for the onslaught of political influence exerted from unexpected sources. Moreover, we must make clear that political influencers are often paid for their engagement with particular campaigns and causes, and that they do not always publicly disclose it.

Hate Speech and Its Relationship with Dis- and Misinformation

The coronavirus disease 2019 (COVID-19) pandemic has deepened our understanding of how pervasive the toxic link between hateful speech and false information is. Trump, for instance, repeatedly called COVID-19 the “Chinese virus.” Brazil’s President Jair Bolsonaro mischaracterized educational material aimed at combatting homophobia as a “gay kit,” exacerbating homophobia in the country. In Lebanon, political groups sometimes resort to hate speech, for example, when justifying the contentious involvement of Lebanese forces in the Syrian civil war. This dynamic, wherein false and misleading information fuels hate speech and vice versa, is perhaps unsurprising to students of propaganda. Terrorists, in particular, have relied on disinformation to sow hate for many years.

And this tactic has proven effective. Research shows that, after Trump’s “Chinese virus” tweet, accounts featuring hate speech ramped up their sharing of misinformation about the Chinese government’s role in the origin and spread of COVID-19. In Brazil, scholars have pointed to an increase in homophobic attacks and the further normalization of homophobia in the country. In Lebanon, the 1982 Sabra and Shatila massacre of Palestinians and Lebanese Shiites provides a harrowing historical example of disinformation contributing, in part, to the deaths of hundreds of innocent people.

Academic research and policies concerned with either false information or hate speech have largely emerged separately, albeit in parallel. The pandemic has drawn attention to how these two problems are interwoven in practice. We need to develop policy that addresses them in tandem.

In the simplest of terms, democratic leaders must commit to ending these harmful dynamics by not initiating or promoting such dangerous speech, and legislators and the judiciary must enforce these commitments through sound policy and legal processes. While some politicians and thought leaders might brush off their role in spreading falsehoods by feigning ignorance, they will face a steeper challenge in plausibly denying bad intent when spreading hateful speech. Elections bring to the fore the civic duty of individuals to stand up for their beliefs, yet digital platforms tend to prioritize provocative and hateful content over civil interaction. One way people can engage without falling prey to ad hominem antagonism is to practise counterspeech. This approach might still mean engaging in tense online conversations and contesting others’ evidence. But social problems such as computational propaganda are complex, and they require multilayered solutions involving institutions, leaders and individuals.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Martin J. Riedl is a postdoctoral research fellow at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.

Inga Trauthig is the head of research of the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.

Samuel Woolley is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin.