It came as no surprise to anyone paying attention when the US Office of the Director of National Intelligence stated recently that Russian President Vladimir Putin and his intelligence agencies had attempted to influence the American presidential election.
The US intelligence community had been signalling for months that Russia was involved in hacking and leaking sensitive information from the Democratic National Committee and other key allies of Hillary Clinton. But the unclassified version of the intelligence report released Friday focused at least as much on Russia’s “state-run propaganda machine” — including a “network of quasi-government trolls,” which it described as a crucial component of the influence campaign.
Such high-volume trolling was used to amplify negative stories about Clinton and drown out positive ones, overwhelming social media conversations about the election.
Keeping critical networks safe from cyberattacks is a serious challenge, but at least there’s a broad consensus on the need to do so. Deciding what to do when social networks are abused by propagandists intent on meddling with another country’s institutions raises far more complicated questions, but it’s no less important.
Much has been written about Russia’s hired trolls, who flood Russian and international social networks with pro-government posts. This election campaign also saw the unprecedented use of Twitter bots — accounts that sent out automated pro-Trump, anti-Clinton tweets hundreds of times a day. One analysis led by an Oxford University researcher found that highly automated accounts were responsible for 18 per cent of all Twitter traffic in the week leading up to the election. Automated pro-Trump tweets swamped similar pro-Clinton tweets by a 5:1 ratio.
The sheer volume of propaganda tweets and accounts makes them more effective. People are more likely to believe something they see repeatedly, from a large variety of sources, according to a report on Russia’s “firehose of falsehood” from the RAND Corporation think-tank. And people are more likely to believe information from someone who appears to be like them, which is why both automated and human-run propaganda accounts are set up to look like genuine American users.
Propaganda accounts sow confusion by contributing to an environment where social media users are overwhelmed with contradictory information, and completely fabricated accounts appear just as credible as traditionally respected sources.
Foreign espionage and propaganda aren’t new, but what sets the 2016 US election apart is the direct and blatant targeting of a fundamental state institution: the means by which citizens choose their government. And while the election is over, there is every reason to believe Russia will continue to use this strategy in other countries. Any state that is interested in protecting its democratic processes should be concerned.
And so should Twitter; people will stop using its platform if they no longer trust it to give them authentic content free from harassment and misinformation.
Twitter traditionally has been reluctant to restrict any content that appears on its platform. That attitude has been changing in the aftermath of several high-profile harassment controversies. There is also a precedent for Twitter disrupting a sophisticated propaganda network with ideological aims: It has taken down at least 360,000 accounts linked to the Islamic State since mid-2015.
In other cases, Twitter uses a policy of ‘Country Withheld Content’ to restrict tweets or accounts from being visible in a specific country that requests it. For example, neo-Nazi material is blocked in Germany, where Holocaust denial is illegal. But Twitter also has been accused of using the same tool to block anti-government views in countries like Turkey. Identifying and blocking foreign propaganda at the request of a state government would risk a similar backlash.
At the same time, democratic governments making such requests risk being branded as hypocrites, supporting freedom of expression only when it fits their agenda. Those countries have to decide whether protecting their crucial political conversations from being manipulated by foreign interests is worth the negative optics.
Twitter also has a decision to make: Does it want to be a place where people can go to inform themselves about the world and share their thoughts with friends and strangers? Or is it more committed to being a place where anyone can use any method to spread any message, no matter how harmful it is to discourse and democracy?
Stephanie MacLellan is a research associate at the Centre for International Governance Innovation (CIGI) where she specializes in Internet governance and cybersecurity.
This article first appeared in iPolitics.