How Can We Stem the Tide of Digital Propaganda?

Computational propaganda — the use of automation, algorithms and other digital technology to spread falsehoods and manipulate public opinion — is a growing problem. Stemming this flow is neither simple nor easy. But it must be done.

July 5, 2021

The following opinion is an edited précis of the author’s book, The Reality Game: How the Next Wave of Technology Will Break the Truth, published by PublicAffairs.


Finding solutions to computational propaganda — the use of automation, algorithms and other digital technology in attempts to manipulate public opinion — is a daunting task. The people working to game public opinion or exert social and political oppression using online tools have access to countless potential targets and to almost unimaginable amounts of data about those targets. Propagandists can leverage online anonymity, automation and the sheer scale of the internet to remain nearly untrackable and uncatchable as they sow deceptive political ads, disinformation about how, when and where to vote, and conspiracy theories about vaccination and climate change. They continue to use increasingly sophisticated hordes of social media bots to amplify and suppress particular content online. They also use a wide variety of human-borne organizational tactics to artificially generate attention for those they support — and to mobilize smear campaigns against those they oppose.

In an analysis of 240 million tweets pertaining to the 2020 US presidential election, researchers told Nature that “around certain political events [such as the national conventions of the US Democratic and Republican parties],” they “observed that the amount of bot activity dwarfed human activity.” They also found that “one in four accounts [using] QAnon hashtags and retweet[ing] Infowars and One America News Network [were] bots.”

But the problem isn’t just on Twitter, although that space remains popular with propagandists seeking to trick media producers, political candidates and influencers with disinformation, in the hope that these targets will reshare it with their large followings.

The team I lead at the Center for Media Engagement at the University of Texas at Austin — the Propaganda Research Lab — has tracked and reported on political groups’ 2020 payments (some of which went unreported) to small-scale, often locally or demographically important influencers with the proviso that they would then endorse a particular political position on sites such as Instagram, Facebook and Telegram. The goal in this case, according to both the influencers and the political action committees and political consulting firms who paid them, is to generate seemingly organic hype around particular ideas within regionally important areas or demographically important groups. The logic goes that these “nano-influencers” are more trusted among their audiences, and that they therefore have more tangible effects upon the political behaviour of their followers than, say, a clunky bot account.

As I’ve argued in my book, The Reality Game: How the Next Wave of Technology Will Break the Truth, the vastness of the internet — where billions of users are creating some 2.5 quintillion bytes of new data every day — combined with some important ethical and legal considerations involved in tracking these bad actors, makes criminal prosecution and short-term changes to technology alone poor strategies for stamping out computational propaganda. For instance, we must not be swayed by arguments that we need to break encryption to combat organized disinformation campaigns in spaces such as WhatsApp or Telegram — because democratic activists use those same platforms to privately organize against repressive regimes. We must also ask and answer intricate questions about free speech, equity and privacy before sidelining particular digital narratives, redesigning social algorithms or doing away with online anonymity. Fixing the ecosystems that allow these political operatives to flourish is a better strategy. Further, we need to place human rights and democracy at the core of our efforts to design, or redesign, not only the next wave of technology but also our policies to govern this online ecosystem.

It’s helpful to break down our possible responses into short-term, medium-term and long-term solutions.

Tool- or technology-based responses are the shortest-term fixes of all, given how quickly technology changes today. Many of these efforts are temporary approaches, focused on addressing the most egregious issues and oversights associated with the infrastructure of the participatory web, or “Web 2.0” — an internet characterized by websites that emphasize user-generated content and social media. These fixes include tweaks to social media news algorithms and to the code that identifies trends, as well as software patches for other existing tools. They also include applications and tools developed to identify junk news, detect social media bots, track false news or catalogue political advertisements.

Such products can be very useful for a time. But many quickly become defunct owing to code-level changes made by social media firms, a lack of funding or upkeep, or propaganda agents simply finding a way around them. With the tactics of online manipulation constantly evolving, such programs need to be continually updated and translated to an ever-growing range of social media platforms to stay relevant. They present a promising start for tools that alert users to the threats of disinformation, but they must be combined with action from technology firms, governments, news organizations and others in order to be truly effective. Although identifying and reporting nefarious or automated social media traffic is important, as is notifying users that they may be encountering false news reports, these efforts are too passive and too focused on user-based fixes to counter computational propaganda in future. Also, research has shown that fact-checking is less than effective in the face of pre-existing beliefs and ideology, and that social media firms are constantly battling to catch and delete new and innovative types of bot-, cyborg- and human-based information operations.
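
To make concrete what these detection tools do, and why they age so quickly, here is a minimal sketch of the kind of heuristic bot scorer many of them build on. Everything in it is an assumption for illustration: the account features, thresholds and weights are invented, and it does not describe the method of any real detection service.

```python
from dataclasses import dataclass


@dataclass
class Account:
    # Illustrative account features; real detectors draw on far richer signals.
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    has_default_avatar: bool


def bot_score(a: Account) -> float:
    """Return a crude 0-1 'bot likelihood' from a handful of fixed heuristics.

    The cut-offs and weights are arbitrary assumptions for illustration;
    an operator can evade rules like these simply by posting a little less
    often or by using aged, customized accounts.
    """
    score = 0.0
    if a.posts_per_day > 50:  # hyperactive posting
        score += 0.4
    if a.account_age_days < 30:  # very new account
        score += 0.2
    if a.following > 0 and a.followers / a.following < 0.01:  # follows many, followed by few
        score += 0.2
    if a.has_default_avatar:  # no profile customization
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    suspect = Account(posts_per_day=120, account_age_days=10,
                      followers=3, following=1800, has_default_avatar=True)
    print(f"bot score: {bot_score(suspect):.2f}")
```

The brittleness is built in: as soon as a platform changes which account data it exposes, or operators slow their posting cadence and age their accounts, fixed rules like these stop firing and the tool has to be rewritten.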

The good news is that it’s not only researchers, policy makers and civil society groups worldwide who are fighting to stem the flow of digital propaganda. Groups of employees at the big tech firms are also engaged. Workers at Facebook have had some success in dismantling predatory and disinformative content during elections, and they’ve taken action against predatory payday loan ads. Googlers have stood firm against shady dealings in military drone research and manufacturing. Nonetheless, it’s also clear that the largest tech firms have to get real with themselves — particularly at the leadership level. While employees might want to make sensible changes to platforms, they often aren’t supported by their bosses. CEOs such as Mark Zuckerberg and Jack Dorsey must admit that they run media companies; that they are purveyors of news, curators of information and, yes, arbiters of truth. They owe a debt to both democracy and the free market. Their allegiance to the latter doesn’t mean they can ignore the former.

It’s About More than Patches and Takedowns

In the medium and long term, we need more than piecemeal tweaks employed in the moment as problems are identified. We need better active defence measures against propaganda and systematic, transparent overhauls of our current social media platforms. We also need new social media platforms and new companies — designed from the outset with democracy and human rights in mind — instead of continuing with a system in which the incumbents make only piecemeal changes while maintaining an overwhelming focus on selling ads, whatever the cost to society. We need new laws and policies, and we need to amend standing ones.

We must move toward more methodical solutions to the problem of computational propaganda. For example, we need an early-warning system for digital deception, which, when propagated by social bots and automated systems, is extremely trackable. Comparing this to how scientists track earthquakes and tsunamis by monitoring movements on the ocean floor, we pointed out that “if we can do this for monitoring our oceans, we can do it for our social media platforms. The principles are the same — aggregating multiple streams of data, making such data transparent, applying the best analytical and computational tools to uncover patterns and detect signals of change.” Further, such methodical approaches cannot be only technical and quantitative: they must incorporate social knowledge, human oversight and nuance, policy making and qualitative research.
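
As a rough illustration of that seismometer analogy, the following sketch flags hours in which the volume of posts pushing a given topic jumps far above its recent baseline. It is a toy built on stated assumptions: the counts are invented, and the rolling window and threshold are arbitrary choices; a real early-warning system would aggregate many such streams across platforms and hand any alert to human analysts for review.

```python
from statistics import mean, stdev


def flag_surges(hourly_counts, window=24, threshold=3.0):
    """Flag hours whose post volume exceeds the rolling mean of the previous
    `window` hours by more than `threshold` standard deviations -- a crude
    signal of possible coordinated amplification. Parameters are illustrative."""
    alerts = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (hourly_counts[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts


if __name__ == "__main__":
    # Invented data: steady chatter for a day, then a sudden coordinated spike.
    counts = [100, 95, 110, 105, 98, 102, 99, 101, 97, 103,
              100, 96, 104, 98, 102, 100, 99, 101, 103, 97,
              100, 98, 102, 99, 1500, 1600, 105, 100]
    print("surge detected at hours:", flag_surges(counts))
```

On the invented series, the detector flags the hours where the spike begins; in practice such a signal would be a prompt for human review, not an automatic takedown.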

Using these social means of protecting ourselves against the flood of digital disinformation will allow us to build more informational resilience in our society, a kind of cognitive immunity, and to prioritize the values inherent in democracy and human rights in vetting not only our data and our technology, but also the policies and laws that govern them. We can also strive to protect and empower different social groups, using different strategies to facilitate their digital or networked connections online and offline, while helping to inoculate them against junk news and fake science by reminding them of their uniqueness and their right to high-quality informational resources. Diaspora communities in Canada and the United States, particularly those with ties to countries led by authoritarian regimes engaged in transnational influence operations, already contend with unique informational challenges as they communicate with friends and relatives back home, with people in the countries they now call home, and with many anonymous, random users across a wide array of private and public social media platforms. We must work with these communities to support their own efforts to counter the particular online manipulation they experience and provide them with resources in the face of electoral disinformation.

It’s time, too, for governments to get serious about educating people about media and data. Existing systems for building this literacy, as experts have pointed out, need to work to overcome structural hurdles (such as one-size-fits-all educational approaches) and to consider ever-evolving social conditions. For example, danah boyd has noted that outdated media literacy approaches fail to “take into consideration the cultural context of information consumption that we’ve created over the last thirty years.” We now live in a world inextricably connected to the online sphere. Because of this, we need to build flexible, approachable and culturally contextual media literacy campaigns for the digital age, rather than shoehorning in outmoded trainings and resources designed in the broadcast era.

Going Offline to Find Solutions

It might seem counter-intuitive. But the longest-term solutions to the problems of computational propaganda and the challenges associated with digital political manipulation are analog, offline solutions. We must invest in society and work to repair damage between groups. Thinking toward social solutions requires that we accept that polarization, nationalism, globalization and extremism are the basic problems in our current world, both domestically and internationally, while disinformation and propaganda are symptoms.

These issues can be addressed, but the primary solutions will be social — from investments in our educational systems, to amendments to laws, to changes in personal beliefs or ideologies that we may once have thought immutable. In order to change harmful perceptions of ourselves or others that seem cemented, we must consider questions and solutions related to empathy, psychology and cultural context. Formal education may not, say, undo racism or fundamentalist thinking on its own. We must also consider the fact that people believe the things they do (and share those beliefs online) not only because of science, but also because of the need to belong. Technology can help us, but even the most advanced machines and software systems are still tools. They are only as useful as the people, and motives, behind their creation and implementation.

Nevertheless, bad symptoms that intensify can worsen the underlying disease. Small things like disinformation on Twitter can inflame large issues, like polarization, in a circular way. But we have to get serious about what we are trying to address, and when and how we are going to do it. We must be systematic in our efforts to fix the problems created by malicious and manipulative uses of social media and other technology. And we need to repair the social bonds that these tactics are so effective at weakening even further.

In our online lives, we need to figure out ways to allow those with whom we have disagreed, argued or even fought to redeem themselves. We must accept that we are also imperfect in our own informational habits, try to improve them, and ask for forgiveness when we post information that turns out to be faulty. No one shares perfect data all the time. None of us are always rational. Nationally and internationally, polarization, nationalism, globalization and extremism have created or widened the divides among people.

Beyond the work that individuals must do in taking responsibility for their online behaviour and education, there is a great deal of work that online platforms must undertake to protect our privacy and identify and exterminate malicious automation, while also stopping the flow of online propaganda. These entities must also work to prevent the misuse of future technology.

Governments have a lot to answer for as well. It is an absolute outrage that so few laws have been passed to date to address political communication online in the United States and many other countries, including Canada. In the United States, the Federal Election Commission (FEC), judging by its oversight efforts, continues to act as if the internet plays little role in electioneering. Meanwhile, most governments continue to fail to effectively address similar problems in the digital public health information ecosystem — to curb seriously harmful (and often hateful) conspiracies about vaccination, COVID-19, and women’s health. In many ways, our legal systems remain in the dark ages when it comes to dealing with informational problems online in general, let alone with more specific issues such as disinformation on social media. Law makers must work to protect the hundreds of millions, even billions, who have already been deceived for political purposes during numerous past elections around the globe. The US government is responsible for regulating Silicon Valley, not rolling over while it becomes the playground for four or five monopolies. New regulation — informed by the expertise of people who actually understand how technology works — is a necessity for preventing the degradation of the internet and the misuse of future technology.

In a 2019 paper, Ann Ravel, Hamsini Sridharan and I outlined some sensible, simple policies that can be enacted immediately to curb the effects of digital deception. We also proposed a number of systemic changes that could be instituted to prevent future misuse of digital platforms. Building on an earlier report from Sridharan and Ravel — and looking specifically at one focus of computational propaganda, US election campaign finances — we recommended the following policy actions. Although these are oriented to the US context, we believe they have relevance for policy makers elsewhere. Some have been adopted by various social media platforms but need to be made legally enforceable:

  • In the United States, pass the Honest Ads Act, originally proposed in 2017, mandating that major technology platforms publish political ad files in order to increase transparency about money in digital politics and dark advertising. Ensure that the data provided is standardized across platforms and provides the necessary level of detail regarding target audiences and ad spends (one possible record layout is sketched after this list). Records should remain publicly available for several years to facilitate enforcement.
  • Expand the definition of “electioneering communications” to include online ads, and extend the window for communications to qualify as “electioneering.” Electioneering communications are ads on hot-button issues that air near an election and reference a candidate but do not explicitly advocate for or against that candidate. Currently, online ads are exempted from the United States’ FEC disclosure rules for this type of advertising, which apply to TV, radio and print. Online ads that satisfy the definition of electioneering communications ought to be regulated. Moreover, with political ads running earlier and earlier each US election cycle, it is important to extend the window of time that electioneering regulations apply for online advertising.
  • Increase US FEC disclosure requirements for paid issue ads, which frequently implicitly support or oppose candidates and are intended to motivate political action but receive little oversight. This is one area where the government is hampered by court interpretations of free speech, but where technology companies could successfully intervene with civil society guidance.
  • Increase transparency for the full digital advertising ecosystem by requiring all political committees to disclose spending by sub-vendors to the FEC. Right now, committees must report payments made to consultants and vendors but aren’t required to disclose payments made by those consultants and vendors for purchases of ads, ad production, voter data or other items; as a result, much digital activity goes unreported. California requires political committees to report all payments of $500 or more made by vendors and consultants on their behalf. Similar rules should be adopted at the federal level.
  • Adapt on-ad “paid for by” disclaimer regulations to apply to digital advertising. Digital political ads should be clearly labelled as promoted content and marked with the name of whoever purchased them. They must contain a one-step mechanism, such as a hyperlink or pop-up, for accessing more detailed disclaimer information, including explicit information about who is being targeted by the ad. There should be no exceptions to this rule based on the size of ads; unlike with pens or buttons, technology companies can adapt the size of digital ads to meet legal requirements.
  • Create an independent authority empowered to investigate the flows of funding in digital political activity. Such a body, equivalent to the Financial Industry Regulatory Authority, would be charged with following the money to its true sources, making it easier for the FEC to identify violations and illegal activity and to enforce penalties.
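
To make the call for standardized, sufficiently detailed ad files concrete, here is one hypothetical record layout for a single digital political ad. The field names and example values are assumptions for illustration only; they do not reproduce the Honest Ads Act’s requirements or any platform’s actual ad-archive schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class PoliticalAdRecord:
    """Hypothetical standardized disclosure record for one digital political ad.

    The fields are illustrative assumptions about the kind of detail the
    recommendations above call for (sponsor, spend, timing, targeting);
    they are not drawn from any statute or platform archive.
    """
    platform: str
    sponsor_name: str            # the "paid for by" entity shown on the ad
    paying_committee_id: str     # FEC or state committee identifier, if any
    spend_usd: float
    impressions: int
    first_shown: date
    last_shown: date
    target_audience: dict = field(default_factory=dict)  # e.g., geography, age range, interests


if __name__ == "__main__":
    ad = PoliticalAdRecord(
        platform="ExamplePlatform",
        sponsor_name="Committee for Illustration",
        paying_committee_id="C00000000",
        spend_usd=2500.0,
        impressions=180_000,
        first_shown=date(2020, 9, 1),
        last_shown=date(2020, 9, 14),
        target_audience={"state": "TX", "age_range": "18-34"},
    )
    print(json.dumps(asdict(ad), default=str, indent=2))
```

The value of a shared layout like this is simply that regulators, journalists and researchers could compare spending and targeting across platforms without having to reverse-engineer each company’s archive.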

Each of these efforts would result in a clearer and less deceptive digital space. By passing legislation to require more transparency in social media advertising, more thorough investigation of digital political activity and, simply put, more accountability in how politics gets done online, we will build a more democratic online world.

Election campaign finance is only one area among many where we need to pass new laws and policies to regulate social media. Solutions are needed for problems posed by data usage and privacy, automation and fake accounts, platform liability and multi-sector infrastructure. These solutions — both their inception and their implementation — require more effective global cooperation around the issue of digital deception, better research and development on this topic, and clearer media and civic education efforts. There are many different strategies that could be tried — including antitrust actions against the tech industry and some kind of global governing board that oversees communication, especially political communication, on digital platforms. It’s clear that computational propaganda has numerous sources, and each needs to be dealt with in thoughtful, customized ways if we want to have a healthy democratic ecosystem.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samuel Woolley is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin.