What We Know — and Don’t Know — about Microtargeting and Its Influence on Political Behaviour

December 5, 2019

There are two, ultimately incompatible, interpretations of the 2018 Cambridge Analytica scandal, in which the political consulting firm harvested vast amounts of personal data from Facebook users’ profiles without their consent. In the first, the information (a combination of demographic data and a broad range of character traits) was used to build sophisticated psychographic profiles of millions of Americans, which were then used to target content to individuals in order to nudge their voting behaviour. This ability to sway voters’ behaviour was pitched in Cambridge Analytica’s marketing material, and it is an underlying concern for many, myself included, who have been advocating for more stringent election integrity and platform policy. This summer, Netflix released The Great Hack, a documentary that largely espouses this view of Cambridge Analytica and features the story of academic-turned-legal activist David Carroll. We interviewed Carroll for this week’s Big Tech podcast, and he provides some valuable context to this debate.

In an alternative telling, data was still harvested from Facebook and used to microtarget American voters (arguably major problems in and of themselves), but the effectiveness of this strategy is called into question. This more skeptical perspective has been articulated by quantitative social scientists such as Dean Eckles and Brendan Nyhan, and by political communications scholars such as Daniel Kreiss, who consider this kind of psychometric targeting for political purposes to be more “snake oil” than science.

Underlying both of these accounts of Cambridge Analytica’s role in the 2016 Brexit referendum and US presidential election is the need for a more nuanced conversation about the true impact of microtargeting on political behaviour, and about what we (governments, political parties, platform companies, citizens) should be doing about it.

This debate was reignited last week with the publication of a study in the Proceedings of the National Academy of Sciences (PNAS), which analyzed the Russian Internet Research Agency’s troll activity on Twitter and found no evidence that it “significantly influenced ideology, opinions about social policy issues, attitudes of partisans toward each other, or patterns of political following on Twitter.” What’s more, the article’s authors argue that one reason for this was that the troll accounts were predominantly interacting with people who were already highly polarized. Their conclusion is therefore counterintuitive: online echo chambers may have served as a containment mechanism for trolling content, meaning that content may not have changed anyone’s views at all.

This study piqued my interest because over the course of the recent Canadian federal election, I directed a large-scale online media monitoring and survey project studying the spread of disinformation and its impact on the behaviour of voters. Our team published seven reports during the election campaign, and our final analysis will be published in January. One of the phenomena we observed aligns with the findings of the new PNAS study. In short, the presence of clearly defined echo chambers in the Canadian online conversation may have inoculated wider communities against the spread of disinformation and false content, which may have mostly been seen by those already predisposed to its message. We therefore think this content likely did not change citizens’ voting behaviour.

But here is where the debate gets more complicated. Taken as the results of isolated surveys and studies of discrete social media exposure, these findings may be sound. But they don’t paint the full picture. In response to the PNAS study, a number of scholars and researchers, such as Siva Vaidhyanathan, Johan Farkas and Renee DiResta, and journalists, such as Caroline Orr, have pointed out that researchers are able to access only a very limited slice of the media a person is exposed to, over a specific period of time, and can capture only a very limited range of behavioural shifts. Such studies also rely on variations of what communications scholars call the “hypodermic needle” theory of media influence: the belief that media messages can be ‘injected’ into the minds of passive audiences. Perhaps more importantly, such studies might miss the actual intent of disinformation campaigns: to divide, inflame, engage and entrench pre-existing biases and polarizing beliefs.

Further — and this is likely the most significant limitation — these studies simply don’t have access to complete data sets. The data are getting better — our Canadian study draws on what is likely one of the most comprehensive election data sets to date (including hundreds of millions of online posts) — but, owing to restrictions on what platform companies will share, researchers still face profound blind spots, including, for example, all comments, private posts and group conversations on Facebook. The larger question, raised in an exchange between Vaidhyanathan and Nyhan, is therefore whether more data and better methods will ever get us to a full understanding of what are, ultimately, complex cultural and deeply human decisions, based on a lifetime of information, knowledge and context. This debate is ultimately one of epistemology.

What are policy makers to take from this seemingly esoteric academic debate? Is disinformation a problem? What can be done about it? First, it is clear we need much better research. We are in the early stages of a discipline-defining period of iterative methodological experimentation. This work needs to be funded and encouraged. But it also needs access to far better data — data that governments must compel platform companies to provide.

The case of the Canadian election also points to another important policy lever, one that can and should be implemented regardless of which interpretation one supports in the debate over Cambridge Analytica’s impact (psychographic profiling as snake oil or as science). Simply put, we need to bring sunlight to the online data and targeting ecosystem. We must demand limits on and transparency over ad targeting (Canada imposed basic measures before the election, but should go much further), as well as far greater access to, and clearer rules about, platforms’ personal data collection activity. This is an easy first step toward addressing some of the vexing (and not yet fully understood) challenges exposed by the 2016 Brexit referendum, the US presidential election, and a string of democratic elections since.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Taylor Owen is a CIGI senior fellow and the host of the Big Tech podcast. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology.