The Business of Computational Propaganda Needs to End

Despite efforts to combat the practice, too many companies continue to use automation and algorithms to manipulate public opinion over social media — manufacturing consensus. This represents a serious problem for society.

September 20, 2021
Photo: Reuters

In the spring of 2014, I conducted an interview with the owner of a marketing firm who specialized in “boosting” clients’ social media metrics. It was the first of what would eventually become a body of more than 50 formal conversations with similar professionals who describe themselves as specialists in various strategies such as “digital growth hacking,” amplification, influence and native advertising. In that first interview, the person I spoke to (who, like many of my interviewees with similar backgrounds, shared his knowledge under the condition of anonymity) revealed a slew of ways he and his team could “game” social media metrics in order to gain more followers for their clients, get more “likes” and, ultimately, become influential on sites such as Twitter, Facebook and YouTube. Bluntly, his company was in the business of making people look more popular on social media than they actually were.

Seven years later, a quick Google search reveals that his firm is still in business. In fact, it seems to have grown — boasting more employees and larger clients. Very little has changed in the menu of services the company offers, despite the fact that many social media companies have clamped down on what they call “coordinated inauthentic behaviour.” And his work, like that of many of his competitors, is often just that: an organized effort to use various “inorganic” mechanisms and tactics to build clients’ digital clout. Bots (automated programs), sock puppets (false online identities) and groups of coordinated social media users, for instance, were crucial parts of his toolkit.

These tools continue to play a role in this business of computational propaganda — defined as the use of automation and algorithms in efforts to manipulate public opinion over social media. That this business still exists — and is, if anything, thriving — represents a serious problem for society. As I’ve argued elsewhere, the people who engage in these practices are manufacturing consensus. They exploit social media algorithms’ focus on quantitative metrics in order to push false trends that, in turn, generate the illusion of popularity for particular issues, people and entities. This illusion of popularity can turn into very real support through the bandwagon effect: when it seems like everyone around us likes or dislikes something, we are more likely to say we like or dislike it too.

It is beyond time to put an end to these types of business practices. While some of their uses are less harmful — say, providing fake likes to a client’s post — they still trade in untruths and disinformation. Other uses are much more nefarious: deploying bots to boost anti-vaccine content, paying trolls to harass journalists and sponsoring small-scale influencers to spread coercive content during elections.

Much of the time, the same companies leverage computational propaganda for both the less and the more pernicious purposes. Of course, not all bogus amplification or suppression of content online is created by professional entities. But a great deal of it is. Policy makers around the world must generate legislation that dismantles and punishes computational propaganda businesses — particularly when they are engaged in spreading electoral disinformation or assaulting vulnerable communities, but also when they use their knowledge and tools to create unfair competition.

It’s still relatively easy to find companies that specialize in this work, although they now tend to veil their practices and claim to boost clients’ digital footprints naturally. But research reveals that bots and sock puppets are still used to massively boost traffic surrounding particular organizations, companies, individuals and ideas in North America and across the globe.

In July 2019, automated accounts actively amplified the #TrudeauMustGo hashtag on Twitter, helping to make it one of the top-trending hashtags in Canada at the time. During the highly contested 2020 US presidential election, paid influencers — many of whom did not disclose they were compensated — pushed partisan messages on behalf of particular super PACs (political action committees) and lobbying organizations. A massive network of more than 14,000 Facebook bots also pushed highly divisive content — including misleading content related to COVID-19 — during that contest. In the run-up to this summer’s mid-term elections in Mexico, coordinated groups pushed false news stories and disinformation websites pertaining to the contest.

Some researchers and experts have argued that it’s tough to ascertain the effects of bots or sock puppet accounts used to push political content online — with a few even going so far as to argue their impact is inconsequential. It’s crucial to remember, though, that the difficulty in tracking these and other tools used in artificially boosting certain streams of content is integral to their design and use by digital marketers. Interviewees have regularly explained to me that they build their tools to fly under the radar — they don’t want them to be discovered, because that costs them time and money.

More crucially, efforts to study the impact of bots on, say, other users’ beliefs can miss the mark. Coordinated automated and non-automated inauthentic accounts are often built to trick social media trending algorithms into thinking an amplified hashtag is popular — bypassing authentic human users entirely. To understand impacts on social media algorithms, we need, as the increasingly tired (although no less relevant) phrasing goes, more data and more buy-in from social media firms themselves.

Some “growth hacking” and social media amplification-oriented firms have been punished for their computational propaganda practices. But while we can learn from these instances, they are relatively few and far between. Devumi, a US-based outfit specializing in “accelerating social growth,” recently reached a multi-million-dollar settlement with the Federal Trade Commission (FTC) after it was caught selling “fake indicators of social media influence — like Twitter followers, retweets, YouTube subscribers and views.” In a case outside social media, but very much related to using the same tactics to game online systems, the FTC has filed a suit seeking up to $31 million in civil penalties against ticket brokers who used automation to buy up and then scalp tens of thousands of tickets for concerts and sporting events.

But our laws need more teeth. The BOTS (Better Online Ticket Sales) Act, under which the FTC scalping suit is being brought, focuses only on punishing bot-driven ticket sales and on levying fines. We need more comprehensive policy worldwide that works to more effectively dismantle the business of computational propaganda.

In some cases, this must include criminal penalties for individuals involved in amplifying disinformation or harassing content on topics such as how, when and where to vote, or in artificially boosting snake oil such as untested “medical” treatments that may cause serious harm to users.

If we don’t stop the computational propaganda business now, its practitioners will continue to become more powerful and more adept at gaming our primary information systems and manipulating public opinion.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samuel Woolley is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin.