Why Should We Care about Propaganda in Communication?

Social media companies make nearly all their income from advertisements, and by definition, the motivation of an advertisement is to change behaviour.

March 2, 2023
A Soviet-era propaganda poster of a mother and child is painted on the wall of a ruined pediatric centre in Moldova. (REUTERS)

The following is adapted from Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity, by Samuel Woolley, published on January 31 by Yale University Press. Copyright © 2023 Yale University Press. Reprinted by permission of Yale University Press.


The popular dictum about how social media companies make their profits goes something like this: “If you don’t know what the product is, the product is you.” Facebook, Twitter and YouTube are for-profit entities focused on making money, but at first glance, it’s hard to see what these firms are selling. Events like the 2018 Cambridge Analytica scandal made the answer clear: social media companies store and trade on the vast amounts of personal data we place on their sites and applications. They sell information on our behaviour to marketing firms, international conglomerates, and — yes — to political campaigns and their surrogates. Jaron Lanier and others have pushed the idea of users as products a step further. The product is not “you,” a single user — the product being sold is imperceptible behavioural change on a massive scale.

Lanier’s argument is a simple truth: social media companies make nearly all their income from advertisements, and by definition, the motivation of an advertisement is to change behaviour. Advertisements try to get someone to buy a product, visit a destination or support a cause — or to vote a particular way, to support or oppose a particular issue, or even to give up on civic engagement entirely. Uniquely, social media carries not just the familiar, identifiable traditional ads but also native ads — paid content that appears to be authentic, user-generated content. The native ads being run now are often nearly impossible to distinguish from “organic” content, because regular people can advertise on their own accounts without anyone — neither the social media firms that run the platforms nor their followers — ever knowing they were paid for the content. Much of the time, even Twitter and Facebook don’t know what’s organic and what’s not when it comes from a seemingly real account.

We don’t yet really understand the behavioural effects of online ads. It’s easy to track whether a given ad on social media gets likes or comments, but it is much harder to track its influence on offline behaviours. This is as true for advertising-driven consumption as it is for online political propaganda. Indeed, it is even more difficult to track the behavioural changes wrought by computational propaganda, which, unlike overt and identifiable political advertising, includes covert political messaging driven by political bots, sockpuppets, gamed “trending now” social media recommendations and coordinated groups of influencers.

What is the measurable effect of these political messages on our actions at the voting booth? Although scholars like Kathleen Hall Jamieson have argued that it is probable Russian trolls and hackers helped elect Donald Trump in 2016, such academics also rightly point out that there is still a lot we don’t know. What is more, we may never have certainty in such situations. But it is still crucial that we gather whatever information we can about how social media alters our lives. A large community of researchers is working to do this. However, as Lanier notes, the task is made more difficult by the fact that the behavioural changes caused by online political propaganda are incremental — distributed and sociological in scale, imperceptible at any given moment. Change can happen without being measurable through experimental analyses. Just because we can’t easily track a change from point A to point B — trace a line from being exposed to propaganda to voting a certain way — does not mean that change does not happen.

The task is also made more difficult by the fact that, as Jacques Ellul pointed out, propaganda is all around us. It’s not as easy as tracking a single official campaign advertisement, let alone a comment from a talking head or a Twitter post made by a partisan nano-influencer. We simply can’t measure these effects at the individual psychological level. Propaganda is inextricable from the whole ecosystem in which we live, with ads, ideas, media technologies, news organizations, and mutable societal norms, values and beliefs all smashed together. When we attempt to do a controlled experiment — for example, recreate a strand of manipulative political content in a vacuum to try to isolate its effects — it stops being propaganda because it’s been separated from the complex sociocultural world in which propaganda operates.

This doesn’t mean that we should give up on working to curb politically motivated disinformation or state-sponsored smear campaigns against journalists. What it does mean is that we don’t have the luxury of waiting to respond to these problems until we fully understand how they affect human behaviour. We must accept that the transmission, or communication, of propaganda leads to all sorts of consequences — some intended, some not — and focus on where the effects are clearest. We can, for instance, work to protect journalists and minority communities — groups that are often the primary targets of computational propaganda campaigns.

We do know that bots can impact the actions of influential political actors and change their digital behaviour. But to understand political influence in a digital world, we can’t focus on tracking pure, empirically evidenced behavioural outcomes — direct notions of change as defined by traditional political science or psychology, which were theorized in an entirely different social and technological world. Instead, we need to think about how to track the diffuse, incremental influence exerted by computational propaganda. Perhaps we should follow the recommendations of scholars like Kate Starbird, focusing on second-order changes rather than first-order ones. In other words, we should focus not on how individual behaviours and ideas change but on how the entire system flexes and evolves. Systemic changes aggregate the changes taking place at the individual level, and they are more easily observed.

Politically motivated groups and individuals continue to use bots regularly to boost their communication. We should ask: What does this behaviour tell us about broader social beliefs and practices? And what does computational propaganda tell us about the new culture of political communication?

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samuel Woolley is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin.