Influence Operations and Disinformation on Social Media

November 23, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

Amid the coronavirus disease 2019 (COVID-19) pandemic, foreign state actors have been spreading disinformation on social media about the disease and the virus that causes it (Bright et al. 2020; Molter 2020). Covering a variety of topics — from the virus's origin to potential cures to its impact on Western societies — COVID-19 disinformation has been created and disseminated widely online.

States — such as Russia and China — have taken to Facebook, Twitter and YouTube to create and amplify conspiratorial content designed to undermine trust in health officials and government administrators, which could ultimately worsen the impact of the virus in Western societies (Barnes and Sanger 2020).

Although COVID-19 has highlighted new and incredible challenges for our globalized society, foreign influence operations that capitalize on moments of global uncertainty are far from new. Over the past few years, public and policy attention has focused largely on foreign influence operations targeting elections and referendums, but health-related conspiracy theories created and amplified as part of state propaganda campaigns also have a long history.

One example is the conspiracy theory that AIDS (acquired immune deficiency syndrome) was the result of a biological weapons experiment conducted by the US government. Historians have documented how Soviet operatives leaked “evidence” into foreign institutions and media outlets questioning the origin of the virus (Boghardt 2009). Because the US government was slow to respond to the AIDS epidemic, which disproportionately affected gay men and people of colour, conspiracy theories about its origin heightened suspicions within these communities that the US government was responsible (Qiu 2017). Decades later, public health research has shown that many people still hold conspiratorial beliefs about the human immunodeficiency virus (HIV) that causes AIDS, which has negatively affected treatment for the disease (Bogart et al. 2010).

Part of the reason the HIV/AIDS conspiracy theory was so effectively inculcated into the belief systems of everyday people is that it identified and exploited pre-existing divisions in society, then used disinformation to sow further discord and distrust. Today, state actors apply the same playbook used during the Cold War as part of contemporary foreign influence operations: in the lead-up to the 2016 US presidential election, for example, disinformation and conspiracy theories injected into social and mainstream media were used to exacerbate racial tensions in the United States, particularly around the Black Lives Matter movement (DiResta et al. 2018; Howard et al. 2018), but also around religious (Hindman and Barash 2018) and gender divides (Bradshaw 2019).

What has changed from Cold War–era information warfare to contemporary influence operations is the information and media landscape through which disinformation circulates. Innovations in technology have transformed modern-day conflict and the ways in which foreign influence operations take place. Over the past two decades, state and non-state actors have increasingly used the internet to pursue political and military agendas, combining traditional military operations with cyberattacks and online propaganda campaigns (North Atlantic Treaty Organization 2016). These “hybrid methods” often use the spread of disinformation to erode the truth and undermine the credibility of international institutions and the liberal world order (National Defence Canada 2017).

Today, unlike in the past, when disinformation campaigns were slow, expensive and data-poor, social media provides a plethora of actors with a quick, cheap and data-rich medium for injecting disinformation into civic conversations. Algorithms that select, curate and control our information environment may prioritize information based on its potential for virality rather than its grounding in veracity. Behind the veil of anonymity, state-sponsored trolls can bully, harass and prey on individuals or communities online, discouraging the expression of some of the most important voices in activism and journalism. Sometimes the people behind these accounts are not even real, but automated scripts of code designed to amplify propaganda, conspiracy and disinformation online. The very design of social media technologies can enhance the speed, scale and reach of propaganda and disinformation, engendering new international security concerns around foreign influence operations online.

Foreign Influence Operations in a Platform Society

From public health conspiracies to disinformation about politics, social media has increasingly become a medium used by states to meddle in the affairs of others (Bradshaw and Howard 2018; 2019). From China’s disinformation campaigns that painted Hong Kong democracy protestors as violent and unpopular dissidents (Wong, Shepherd and Liu 2019), to Iranian-backed disinformation campaigns targeting political rivals in the Gulf (Elswah, Howard and Narayanan 2019), state actors are turning to social media as a tool of geopolitical influence. And it is not just state actors who turn to social media platforms to spread disinformation and propaganda. Populist political parties, far-right media influencers, dubious strategic communications firms and the charlatans of scientific disinformation have all found a home for conspiracy, hate and fear on social media (Campbell-Smith and Bradshaw 2019; Evangelista and Bruno 2019; Numerato et al. 2019). What is it about the contemporary communication landscape that makes social media such a popular — and arguably powerful — platform for disinformation?

Social media platforms have come to dominate almost every aspect of human interaction, from interpersonal relations to the global economy. But they also perform important civic functions. Increasingly, these platforms are an important source of news and information for citizens around the world (Newman et al. 2020). They are a place for political discussion and debate, and for mobilizing political action (Benkler 2007; Castells 2007; Conover et al. 2013). Politicians also rely on social media for political campaigning, galvanizing support and connecting with their constituents (Hemsley 2019; Howard 2006; Kreiss 2016). But social media platforms are not neutral (Gillespie 2010). Scholars have described how their technical designs and governance policies (such as terms of service, community standards or advertising policies) embed a wide range of public policy concerns, from freedom of speech and censorship to intellectual property rights and fair use, and tensions between privacy and surveillance online (DeNardis and Hackl 2015; Gillespie 2019; Hussain and Howard 2013; MacKinnon 2012). Platform design and governance also shape the democratic functions of platforms, including how disinformation and propaganda spread. While it is important to recognize that all technologies have socio-political implications to various degrees, several characteristics of social media platforms create a particular set of concerns for the spread of disinformation and propaganda.

Aggregation

One of the most salient features of today’s information and communication environment is the massive amount of data aggregated about individuals and their social behaviour. The immense amount of data we leave behind as we interact with technology and content has been called “data exhaust” by some scholars (Deibert 2015). Our exhaust — the by-product of our interactions with online content — is used by platforms to create detailed pictures of who we are, not only as people and consumers but also as citizens or potential voters in a democracy (Tufekci 2014). The collection, aggregation and use of data allow foreign adversaries to micro-target users with political advertisements during elections. Like all political advertising, these messages could drive support and mobilization for a certain candidate or suppress the political participation of certain segments of the population (Baldwin-Philippi 2017; Chester and Montgomery 2019; Kreiss 2017). We have already seen foreign agents purchase political advertisements to target individuals or communities with messages of mobilization and suppression (Mueller 2019). Although platforms have taken several steps to limit foreign advertising, such as currency restrictions or account verification measures, foreign actors have found ways to subvert these measures (Satariano 2018).
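
To make the mechanism concrete, the following Python sketch illustrates how behavioural “exhaust” (likes, clicks and shares) might be aggregated into a coarse interest profile and then matched against an advertiser’s targeting criteria. All field names, topics and thresholds are illustrative assumptions, not a description of any platform’s actual systems.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Coarse interest profile built from a user's interaction history (illustrative only)."""
    user_id: str
    interest_counts: Counter = field(default_factory=Counter)

    def record_interaction(self, topic: str, weight: float = 1.0) -> None:
        # Each like, click or share nudges the inferred interest weights.
        self.interest_counts[topic] += weight

    def top_interests(self, n: int = 3) -> list[str]:
        return [topic for topic, _ in self.interest_counts.most_common(n)]

def matches_target_audience(profile: UserProfile, targeted_topics: set[str]) -> bool:
    """Return True if the user's inferred interests overlap the ad's targeting criteria."""
    return bool(set(profile.top_interests()) & targeted_topics)

# Hypothetical usage: a profile assembled from a user's interaction log.
profile = UserProfile(user_id="u123")
for topic in ["local_politics", "local_politics", "health", "sports"]:
    profile.record_interaction(topic)

# An advertiser (or a foreign operator posing as one) selects audiences by topic.
print(matches_target_audience(profile, {"local_politics"}))  # True
```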

Algorithms

Platforms apply algorithms — or automated sets of rules or instructions — to transform data into a desired output. Using mathematical formulas, algorithms rate, rank, order and deliver content based on factors such as an individual user’s data and personal preferences (Bennett 2012), aggregate trends in the interests and behaviour of similar users (Nahon and Hemsley 2013), and reputation systems that evaluate the quality of information (van Dijck, Poell and de Waal 2018). The algorithmic curation of content — whether it be a result of personalization, virality and trends, or reputation scores — affects how news and information is prioritized and delivered to users, including whether algorithms present diverse views or reinforce singular ones (Borgesius et al. 2016; Dubois and Blank 2018; Flaxman, Goel and Rao 2016; Fletcher and Nielsen 2017), nudge users toward extreme or polarizing information (Horta Ribeiro et al. 2019; Tufekci 2018) or emphasize sensational, tabloid or junk content over news and other authoritative sources of information (Bradshaw et al. 2020; Neudert, Howard and Kollanyi 2019).
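
As a rough illustration of how such curation can tilt toward virality over veracity, the Python sketch below scores posts by blending a personalization signal, an engagement (virality) signal and a source-reputation signal. The weights, fields and normalization are assumptions made for the example; real ranking systems are far more complex and are not publicly documented in this form.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    shares_per_hour: float    # proxy for virality
    source_reputation: float  # 0.0 (unknown or low quality) to 1.0 (authoritative)

def rank_feed(posts: list[Post], user_interests: dict[str, float],
              w_personal: float = 0.4, w_viral: float = 0.5,
              w_reputation: float = 0.1) -> list[Post]:
    """Order posts by a weighted blend of personalization, virality and reputation.

    With a heavy virality weight and a light reputation weight, sensational but
    dubious content can outrank authoritative sources, which is the dynamic
    described in the text. Weights here are illustrative assumptions.
    """
    def score(post: Post) -> float:
        personal = user_interests.get(post.topic, 0.0)
        viral = min(post.shares_per_hour / 100.0, 1.0)  # crude normalization
        return w_personal * personal + w_viral * viral + w_reputation * post.source_reputation

    return sorted(posts, key=score, reverse=True)

# Hypothetical usage: a viral, low-reputation post outranks a reputable report.
posts = [
    Post("a", "health", shares_per_hour=250.0, source_reputation=0.1),
    Post("b", "health", shares_per_hour=20.0, source_reputation=0.9),
]
ranked = rank_feed(posts, user_interests={"health": 0.8})
print([p.post_id for p in ranked])  # ['a', 'b'] under these weights
```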

Anonymity

Platforms afford different levels of anonymity to users. Whether users must use their real name has implications for whether bots, trolls or even foreign state actors can mask their identity in order to harass or threaten political activists and journalists, or to distort authentic conversations about politics (Nyst and Monaco 2018). With anonymity, there is a lack of transparency about the source of information and whether news, comments or debate come from authentic voices or from ones trying to distort the public sphere. Related to anonymity is the question of data disclosure: personal data disclosed to third parties can be exploited if unscrupulous firms or foreign state actors use psychographic profiles to suppress voter turnout (Wylie 2020).

Automation

Platforms afford automation — accounts can automatically post, share or engage with content or users online. Automated accounts — sometimes referred to as “political bots” or “amplifier accounts” — can post far more frequently and consistently than any human user (McKelvey and Dubois 2017). Although there are many ways to classify automated accounts and the activities they perform (Gorwa and Guilbeault 2020), they generally perform two functions in foreign influence operations. First, by liking, sharing, retweeting or posting content, automated accounts can generate a false sense of popularity, momentum or relevance around a particular person or idea. Networks of bots can be used to distort conversations online by getting disinformation or propaganda to trend (Woolley 2016). Second, automation has been an incredibly powerful tool for targeting and harassing journalists and activists, who are flooded with threats and hate by accounts that are not even real (Nyst and Monaco 2018).
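
Because unusually high posting frequency and machine-like regularity are among the signals researchers look for, a simple detection heuristic can be sketched in Python: flag accounts whose posting rate and interval regularity exceed what a human plausibly sustains. The thresholds below are illustrative assumptions, not established cut-offs, and real bot-detection systems combine many more signals.

```python
from statistics import pstdev

def likely_amplifier(post_timestamps: list[float],
                     max_daily_posts: float = 50.0,
                     min_interval_stdev: float = 30.0) -> bool:
    """Heuristic flag for automated 'amplifier' behaviour.

    post_timestamps: UNIX timestamps (seconds) of an account's posts.
    Flags accounts that post far more often than a typical human and at
    suspiciously regular intervals. Thresholds are illustrative only.
    """
    if len(post_timestamps) < 2:
        return False
    timestamps = sorted(post_timestamps)
    # Observation window in days (floor of one hour to avoid division blow-ups).
    span_days = max((timestamps[-1] - timestamps[0]) / 86400.0, 1.0 / 24.0)
    posts_per_day = len(timestamps) / span_days
    # Low spread between consecutive posts suggests machine-like regularity.
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    interval_spread = pstdev(intervals)
    return posts_per_day > max_daily_posts and interval_spread < min_interval_stdev

# Hypothetical usage: an account posting exactly once a minute, around the clock.
bot_like = [i * 60.0 for i in range(200)]
print(likely_amplifier(bot_like))  # True
```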

The Future of Disinformation and Foreign Influence Operations

In conclusion, the spread of disinformation and propaganda online is a growing concern for the future of international security. The salient features of platforms — aggregation, algorithms, anonymity and automation — are some of the ways contemporary technologies can contribute to the spread of harmful content online, and foreign state actors are increasingly leveraging these tools to distort the online public sphere. The use of social media in “hybrid” methods of warfare reflects, more broadly, how technological innovation changes the nature of conflict. Indeed, technology has long been recognized as a force that enables social and political transformation (Nye 2010). Similarly, the unique features of our contemporary information and communication environment provide new opportunities for state actors to use non-traditional methods of warfare to pursue their goals.

As we see innovations in technology, we will also see innovations in the way in which propaganda and disinformation spread online. The Internet of Things, which is already revolutionizing the way we live, creates even more data about us as individuals and as citizens. What happens in a world where we can measure someone’s physiological response to propaganda through wearable technology? We interact with “chatbots” like Alexa and Siri every day. What happens when the growing sophistication of chatbot technology is applied to political bots on Facebook or Twitter? How will the platforms differentiate between genuine human conversations and automated interactions?

Thus far, combatting disinformation and propaganda has been a constant game of whack-a-mole. Private responses focus on third-party fact-checking or on labelling information that might be untrustworthy, misleading or outright false. Government responses, in the form of laws and regulations, place a greater burden on platforms to remove certain kinds of harmful content, often without defining what constitutes harm. But propaganda and disinformation are also systems problems. Too often, public and private responses focus on the content while ignoring the technical agency of platforms to shape, curate and moderate our information ecosystem. Rather than focusing solely on content, we need to look at the deeper systemic issues that make disinformation and propaganda go viral in the first place. This means thinking about the features of platforms that enhance or exacerbate the spread of harmful content online.

Works Cited

Baldwin-Philippi, J. 2017. “The Myths of Data-Driven Campaigning.” Political Communication 34 (4): 627–33. https://doi.org/10.1080/10584609.2017.1372999.

Barnes, J. E. and D. E. Sanger. 2020. “Russian Intelligence Agencies Push Disinformation on Pandemic.” The New York Times, July 28. www.nytimes.com/2020/07/28/us/politics/russia-disinformation-coronavirus.html.

Benkler, Y. 2007. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Bennett, W. L. 2012. “The Personalization of Politics: Political Identity, Social Media, and Changing Patterns of Participation.” The ANNALS of the American Academy of Political and Social Science 644 (1): 20–39. https://doi.org/10.1177/0002716212451428.

Bogart, L. M., G. Wagner, F. H. Galvan and D. Banks. 2010. “Conspiracy Beliefs About HIV Are Related to Antiretroviral Treatment Nonadherence Among African American Men With HIV.” Journal of Acquired Immune Deficiency Syndromes 53 (5): 648–55. https://doi.org/10.1097/QAI.0b013e3181c57dbc.

Boghardt, T. 2009. “Operation INFEKTION: Soviet Bloc Intelligence and Its AIDS Disinformation Campaign.” Studies in Intelligence 53 (4): 1–24.

Borgesius, F. J. Z., D. Trilling, J. Möller, B. Bodó, C. H. de Vreese and N. Helberger. 2016. “Should we worry about filter bubbles?” Internet Policy Review 5 (1): 1–16. https://policyreview.info/node/401/pdf.

Bradshaw, S. 2019. The Gender Dimensions of Foreign Influence Operations. Report prepared at the request of Global Affairs Canada.

Bradshaw, S. and P. N. Howard. 2018. “Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation.” Working Paper 2018.1. Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute.

———. 2019. “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.” Working Paper 2019.2. Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf.

Bradshaw, S., P. N. Howard, B. Kollanyi and L.-M. Neudert. 2020. “Sourcing and Automation of Political News and Information over Social Media in the United States, 2016–2018.” Political Communication 37 (2): 173–93.

Bright, J., H. Au, H. Bailey, M. Elswah, M. Schliebs, N. Marchal, C. Schwieter, K. Rebello and P. N. Howard. 2020. “Coronavirus Coverage by State-Backed English-Language News Sources: Understanding Chinese, Iranian, Russian and Turkish Government Media.” Data Memo 2020.2. Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute. https://comprop.oii.ox.ac.uk/research/posts/coronavirus-coverage-by-state-backed-english-language-news-sources/.

Campbell-Smith, U. and S. Bradshaw. 2019. “Global Cyber Troops Country Profile: India.” Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/05/India-Profile.pdf.

Castells, M. 2007. “Communication, Power and Counter-power in the Network Society.” International Journal of Communication 1 (1): 29. https://ijoc.org/index.php/ijoc/article/view/46.

Chester, J. and K. C. Montgomery. 2019. “The digital commercialisation of US politics — 2020 and beyond.” Internet Policy Review 8 (4). https://policyreview.info/articles/analysis/digital-commercialisation-us-politics-2020-and-beyond.

Conover, M. D., E. Ferrara, F. Menczer and A. Flammini. 2013. “The Digital Evolution of Occupy Wall Street.” PLOS ONE 8 (5): e64679. https://doi.org/10.1371/journal.pone.0064679.

Deibert, R. 2015. “The Geopolitics of Cyberspace After Snowden.” Current History 114 (768): 9–15. https://doi.org/10.1525/curh.2015.114.768.9.

DeNardis, L. and A. M. Hackl. 2015. “Internet Governance by Social Media Platforms.” Telecommunications Policy 39 (9): 761–70. https://doi.org/10.1016/j.telpol.2015.04.003.

DiResta, R., K. P. Shaffer, B. Ruppel, D. M. Sullivan, R. Matney, R. Fox, J. Albright and B. E. Johnson. 2018. “The Tactics & Tropes of the Internet Research Agency.” White paper. Austin, TX: New Knowledge. https://disinformationreport.blob.core.windows.net/disinformation-report/NewKnowledge-Disinformation-Report-Whitepaper.pdf.

Dubois, E. and G. Blank. 2018. “The echo chamber is overstated: the moderating effect of political interest and diverse media.” Information, Communication & Society 21 (5): 729–45. https://doi.org/10.1080/1369118X.2018.1428656.

Elswah, M., P. N. Howard and V. Narayanan. 2019. “Iranian Digital Interference in the Arab World.” Data Memo 2019.1. Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/04/Iran-Memo.pdf.

Evangelista, R. and F. Bruno. 2019. “WhatsApp and political instability in Brazil: targeted messages and political radicalization.” Internet Policy Review 8 (4). https://policyreview.info/articles/analysis/whatsapp-and-political-instability-brazil-targeted-messages-and-political.

Flaxman, S., S. Goel and J. M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80 (S1): 298–320. https://doi.org/10.1093/poq/nfw006.

Fletcher, R. and R. K. Nielsen. 2017. “Are people incidentally exposed to news on social media? A comparative analysis.” New Media & Society 20 (7): 2450–68. https://doi.org/10.1177/1461444817724170.

Gillespie, T. 2010. “The politics of ‘platforms.’” New Media & Society 12 (3): 347–64.

———. 2019. Custodians of the Internet: Platforms, Content Moderation and the Hidden Decisions that Shape Social Media. New Haven, CT: Yale University Press.

Gorwa, R. and D. Guilbeault. 2020. “Unpacking the Social Media Bot: A Typology to Guide Research and Policy.” Policy & Internet 12 (2): 225–48. https://doi.org/10.1002/poi3.184.

Hemsley, J. 2019. “Followers Retweet! The Influence of Middle-Level Gatekeepers on the Spread of Political Information on Twitter.” Policy & Internet 11 (3): 280–304. https://doi.org/10.1002/poi3.202.

Hindman, M. and V. Barash. 2018. Disinformation, ‘Fake News’ and Influence Campaigns on Twitter. Miami, FL: Knight Foundation. https://knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter.

Horta Ribeiro, M., R. Ottoni, R. West, V. A. F. Almeida and W. Meira. 2019. “Auditing Radicalization Pathways on YouTube.” Cornell University arXiv e-print, December 4. https://arxiv.org/abs/1908.08313v3.

Howard, P. N. 2006. New Media Campaigns and the Managed Citizen. Cambridge, UK: Cambridge University Press.

Howard, P. N., B. Ganesh, D. Liotsiou, J. Kelly and C. François. 2018. The IRA, Social Media and Political Polarization in the United States, 2012–2018. Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/The-IRA-Social-Media-and-Political-Polarization.pdf.

Hussain, M. M. and P. N. Howard, eds. 2013. State Power 2.0: Authoritarian Entrenchment and Political Engagement Worldwide. Farnham, UK: Ashgate Publishing.

Kreiss, D. 2016. Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy. New York, NY: Oxford University Press.

———. 2017. “Micro-targeting, the quantified persuasion.” Internet Policy Review 6 (4). https://policyreview.info/articles/analysis/micro-targeting-quantified-persuasion.

MacKinnon, R. 2012. Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York, NY: Basic Books.

McKelvey, F. and E. Dubois. 2017. “Computational Propaganda in Canada: The Use of Political Bots.” Working Paper 2017.6. Oxford, UK: Project on Computational Propaganda, Oxford Internet Institute. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/06/Comprop-Canada.pdf.

Molter, V. 2020. “Virality Project (China): Pandemics & Propaganda.” Cyber News, March 19. Stanford, CA: Cyber Policy Center. https://cyber.fsi.stanford.edu/news/chinese-state-media-shapes-coronavirus-convo.

Mueller, R. S., III. 2019. Report On The Investigation Into Russian Interference In The 2016 Presidential Election. Volume I of II. Washington, DC: US Department of Justice. www.justice.gov/storage/report.pdf.

Nahon, K. and J. Hemsley. 2013. Going Viral. Cambridge, UK: Polity Press.

National Defence Canada. 2017. Strong, Secure, Engaged: Canada’s Defence Policy. Ottawa, ON: National Defence. http://dgpaapp.forces.gc.ca/en/canada-defence-policy/docs/canada-defence-policy-report.pdf.

Neudert, L.-M., P. Howard and B. Kollanyi. 2019. “Sourcing and Automation of Political News and Information During Three European Elections.” Social Media + Society 5 (3). https://doi.org/10.1177/2056305119863147.

Newman, N., R. Fletcher, A. Schulz, S. Andi and R. Kleis Nielsen. 2020. Reuters Institute Digital News Report. Oxford, UK: Reuters Institute for the Study of Journalism.

North Atlantic Treaty Organization. 2016. Social Media as a Tool of Hybrid Warfare. Riga, Latvia: North Atlantic Treaty Organization. www.stratcomcoe.org/social-media-tool-hybrid-warfare.

Numerato, D., L. Vochocová, V. Štětka and A. Macková. 2019. “The vaccination debate in the ‘post-truth’ era: social media as sites of multi-layered reflexivity.” Sociology of Health & Illness 41 (S1): 82–97. https://doi.org/10.1111/1467-9566.12873.

Nye, J. S. 2010. “Cyber Power.” Paper, May. Cambridge, MA: Belfer Center for Science and International Affairs, Harvard Kennedy School.

Nyst, C. and N. Monaco. 2018. State-Sponsored Trolling: How Governments Are Deploying Disinformation as Part of Broader Digital Harassment Campaigns. Palo Alto, CA: Institute for the Future.

Qiu, L. 2017. “Fingerprints of Russian Disinformation: From AIDS to Fake News.” The New York Times, December 12. www.nytimes.com/2017/12/12/us/politics/russian-disinformation-aids-fake-news.html.

Satariano, A. 2018. “Ireland’s Abortion Referendum Becomes a Test for Facebook and Google.” The New York Times, May 25. www.nytimes.com/2018/05/25/technology/ireland-abortion-vote-facebook-google.html.

Tufekci, Z. 2014. “Engineering the public: Big data, surveillance and computational politics.” First Monday 19 (7). https://doi.org/10.5210/fm.v19i7.4901.

———. 2018. “YouTube, the Great Radicalizer.” The New York Times, March 10. www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.

van Dijck, J., T. Poell and M. de Waal. 2018. The Platform Society: Public Values in a Connective World. New York, NY: Oxford University Press.

Wong, S.-L., C. Shepherd and Q. Liu. 2019. “Old messages, new memes: Beijing’s propaganda playbook on the Hong Kong protests.” Financial Times, September 3. www.ft.com/content/7ed90e60-ce89-11e9-99a4-b5ded7a7fe3f.

Woolley, S. C. 2016. “Automating power: Social bot interference in global politics.” First Monday 21 (4). http://firstmonday.org/ojs/index.php/fm/article/view/6161.

Wylie, C. 2020. Mindf*ck: Inside Cambridge Analytica’s Plot to Break the World. London, UK: Profile Books.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samantha Bradshaw is a CIGI fellow and assistant professor in new technology and security at American University.
