Russia’s invasion of Ukraine and the accompanying propaganda wars are a stark reminder that, as Aeschylus knew in the fifth century BC, the first casualty of war is truth. And as Russia clamps down on what it deems to be “fake news,” choking off online information and criminalizing free expression, the direct connection between virtual and real-world harms comes into sharp focus.
Unlike in Vegas, what happens online does not stay online, although the traces of it may linger there forever. Peace and democracy rely on freedom of information and the right to form and hold our opinions freely. Increasingly, these freedoms are both enjoyed, and destroyed, online.
States around the world are grappling with the dangers to which we are exposed online. In the UK, the Online Safety Bill, which had its second reading this week, recognizes the need to protect content of democratic importance. But it fails to address the dangers of systems that work to undermine democracy by curating messages that mould our world view. The content is often not the problem.
A few weeks ago, I received a video on WhatsApp from a friend in Uganda, who asked me if it was real. It appeared to be a BBC newsflash saying that the Russians, against the backdrop of tensions in Ukraine, had launched a nuclear attack on London. It was not real. But it was very realistic. It turned out the video had been made in 2018 as part of a corporate emergency response exercise. There was nothing inherently damaging about the content and my friend was not sharing it with malicious intent. The video wasn’t designed to undermine peace and democracy, but its widespread circulation, in the context, may well have been.
The right to freedom of opinion, including the right to keep our opinions to ourselves and to form our opinions free from manipulation, is protected absolutely in international human rights law. But the ways in which online information is managed, targeted and amplified pose serious threats to that right in practice.
The use of propaganda by states to manipulate the worldview of entire populations, at home and abroad, is not new. The British Ministry of Information developed a structured approach to propaganda in the First World War that was honed by Nazi Germany with the technological tools of mass communications in the build-up to the Second World War. But what has changed, with 21st-century technology, is the ability to personalise messages to target individual minds on a massive scale around the world.
In the wake of the Cambridge Analytica scandal, the UK’s Information Commissioner’s Office flagged the risks of political parties’ unchecked use of data in its 2018 report Democracy disrupted? Such data is valuable not only for understanding voters but because it can also be used to influence them. Yet politicians of all persuasions preferred to turn a blind eye to the issue when the Data Protection Act was passed that same year, and it seems they are preparing to do the same with the underlying problems of online safety and their impact on democracy.
Cambridge Analytica may have gone away. But the potential for political behavioural micro-targeting in the United Kingdom, tailoring online information to citizens’ personal foibles so as to press their individual psychological buttons, is now enshrined in UK law. And the threat of Russian or other hostile-state interference with our online visions of the world is well documented. What could be more harmful than the ability to hijack the minds of a population through the targeted manipulation of their information environment? The data, and the content, matter, because they provide a portal to our inner lives.
And it’s not only about war. The Online Safety Bill does not address the underlying problem of the manipulative power of online information flows that affect so many aspects of our lives. That power is what makes Instagram toxic for teenage girls’ mental health and what persuades people that COVID-19 is a hoax or that the world is flat. Propaganda is not found in the syntax of a single statement; it is in the control of the delivery systems, the atmosphere and, ultimately, the emotions of the population. The problem is not with individual pieces of content but with how, by whom and to what end that content is managed.
Political hostility to the wider human-rights project makes it difficult to discuss the bigger picture around online harms, but we cannot afford to look away. Rather than making the United Kingdom a beacon for online safety, the new bill’s focus on freedom of expression and privacy, while ignoring the broader implications for human rights and democratic institutions, is likely to leave us scrambling over parochial culture wars while the big boys get on with the business of mind control.
If we don’t want to lose our minds, we need to address the systems, not the symptoms, of online harms. The real-life horrors of war in Ukraine are a stark reminder that we need to think fast.
This piece was co-published with openDemocracy.