Countering Digital Propaganda: Can Former Culprits Help?

Propagandists have worked on behalf of democracies and autocracies alike, often pragmatically making use of the newest media tools available to them.

November 8, 2021
An InfoWars sticker is surrounded by footprints on a US Capitol Visitor Center skylight a day after supporters of US President Donald Trump stormed the Capitol Building in Washington, DC, January 7, 2021. (REUTERS/Erin Scott)

In the last two weeks Facebook has (again) proven resilient in its earnings in the face of a flurry of scandals, including some related to COVID-19 misinformation. Simultaneously, the US debate has picked up (again), with many arguing that the continuation of the pandemic, spurred at least partly by slow vaccine uptake and a continued lack of mask wearing, is due to a parallel “infodemic.”

While neither the popularity of social media nor the spread of false or misleading information could be considered novel in fall 2021, the global COVID-19 pandemic has intensified the societal repercussions of both. And while analysts and policy makers from varying backgrounds mostly agree that the spread of false information is having adverse societal effects, the ways of engaging with those effects are often still shaped by siloed perspectives tied to individual experts’ professional backgrounds.

It remains true that the ongoing digital deluge of false information is one of many societal challenges tied to growing polarization, populism and hate inhibiting the function of democracy around the world. But this “disinformation” lens is only one way of looking at — and responding to — the informational problems that confront us. Propaganda, more broadly, has been studied by modern social scientists for more than a century. Propagandists have worked on behalf of democracies and autocracies alike, often pragmatically making use of the newest media tools available to them. In recent years, computational propagandists who make use of bots, sock puppets and algorithmic manipulation have been at the forefront of message manipulation in both online and offline spaces.

In our research we’ve interviewed current and “ex” propagandists across numerous countries, working for all sorts of entities: shady public relations firms, governments, troll farms and political parties among them. Many have approached us in the context of a mea culpa. Some have even offered to help us, or other groups, combat propagandistic messaging online.

The question is, should digital propagandists have a seat at the table as we work to develop solutions and countermeasures?

Who Is Spreading Propaganda?

The spread of computational propaganda features a mix of human and machine amplification. This growing form of information manipulation relies upon automation, anonymity and other parts of the digital tool kit in order to proliferate. For short-term solutions, addressing how to disrupt these digital processes is paramount. But for long-term answers, we must consider the social and cultural drivers of mis- or disinformation, conspiracy theories, and coordinated hate and trolling. To do so, we must interrogate the human portion of this human-machine communication problem. The people who spread computational propaganda — their backgrounds and motivations — can be useful in understanding why and how it spreads and what can be done to stop it.

Online propagandists range from paid employees working for the Chinese government to unpaid ideologues who take it upon themselves to produce and spread propaganda, including, but not only, prominent figures and outlets such as Donald Trump and InfoWars. Other propagandists, often uncompensated financially, see themselves as acting out their duties as good citizens, and, during interviews with us, have insisted that they do not see themselves as part of a state propaganda program. The social capital and circumstantial benefits they assume they are earning are what seem to drive them. Finally, terrorists, too, have repeatedly relied on propaganda and disinformation.

Generally, however, the greatest and most persistent challenge is to identify who is behind a given propaganda campaign. Propagandists can take advantage of online anonymity, automation and the sheer scale of the internet to remain in the shadows as they sow deceptive political messaging. However, some are not propagandists for life, and here is where a largely untapped potential may lie, if we take seriously the idea that our societies have been shaken by polarization originating in the online space and are in need of remedies.

Who Should Be at the Table?

Conceptually, propaganda is one among many societal ills — some others being hate speech, the spread of conspiracy theories, defamation, trolling, disinformation and detrimental advertising — that are, in 2021, intrinsically linked to social media platforms. Concentrating our focus on only one of these issues risks overlooking the connections between them and, more importantly, the solutions that a more holistic approach might yield.

There is much to be learned from people who’ve worked to produce propaganda. Their insights can provide unique details on how, for instance, the strategies and technologies used in coordinated disinformation campaigns can be mitigated via inoculation and resilience programming. Terrorism researchers, for instance, have for years debated whether and how to engage with (former) extremists in prevention or deradicalization programs, including counter-narrative efforts such as those supported by Public Safety Canada. The discussion of how to include ex-propagandists or ex-spreaders of disinformation in countering programs is far less advanced.

On the plus side, conversations with (former) extremists can lead to new levels of understanding. Furthermore, demonizing terrorists is counterproductive to finding meaningful ways to reach people on the path to radicalization. Finally, not talking to extremists is unrealistic, in both journalism and academia, but decisions surrounding how to employ the insights revealed through these conversations are crucial. Information gathered from former extremists (or ex-propagandists) should be carefully scrutinized and carefully presented. Those leveraging these individuals’ knowledge to fight propaganda must not inadvertently give them an opportunity to voice their ideology or perpetuate deceptive political messaging. In that vein, Whitney Phillips suggests that, when reporting on extremists, antagonists and manipulators online, we need to make sure not to inadvertently prop up, or amplify, their destructive efforts.

The Path Ahead

Although they may employ the same digital tool kit, propagandists and violent extremists are certainly not the same, since violent extremists are defined by their existing commitment to violence. But, if we are aiming to engage with contemporary societies’ myriad and interconnected challenges, it’s essential that we break down the definitional and practical silos that hinder a holistic problem-solving approach. The burgeoning reliance on strategies such as stochastic terrorism, a method by which those in power instigate violence and extremist acts in support of their political aims by subtly suggesting such acts should happen while maintaining plausible deniability, could be a harbinger of what is to come. As Kurt Braddock put it, “When statements reach millions of people, at least some will interpret them as orders.” Calls from scholars such as John Horgan to include psychological counselling and reconciliation efforts in programs attempting deradicalization of domestic terrorists, to help rebuild trust in government and between communities divided by political and cultural disagreements, pinpoint where the future work lies.

Manipulation and deception, as well as violence, have always been part of politics, and no state or society will be able to eliminate them. But society’s immense reliance on cyberspace, and on social media in particular, is forcing us to rethink how we manage mediated propaganda and extremism. Psychologists have outlined how the online space can make people more aggressive in their messaging; terrorism studies researchers have shown that, to have impact, countering programs need to grasp the complexities of radicalization and be regularly re-evaluated; communication studies researchers have worked to understand the novelties and characteristics of changing information ecosystems. Policy makers working to make democracies stronger are well advised to draw on the insights emerging from these three research areas, because we need to repair the social bonds that malevolent and manipulative uses of social media and other technology are so effective at weakening.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Samuel Woolley is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin.

Inga Trauthig is the head of research of the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.