Searching for a Stronghold in the Fight Against Disinformation

June 4, 2018
Former Cambridge Analytica employee Chris Wylie testifies about the alleged improper use of information from millions of Facebook users for political purposes. (AP Photo/Jose Luis Magana)

When Facebook CEO Mark Zuckerberg testified before the US Congress recently, he outlined a considerable task for social media companies: to ensure that online communities are pulling people together, not driving them apart. But in analyzing how social media was exploited by a spectrum of “bad actors” in the Brexit referendum, in the 2016 US presidential election and in a range of other elections across Europe, it has become clear that social media algorithms are far more effective at accelerating polarization than at building the kind of community Zuckerberg described.

Social media giants are under intense scrutiny for — at best — turning a blind eye to the proliferation of information operations on their platforms and — at worst — knowingly profiteering from and facilitating the use of their platforms for these kinds of campaigns. As a result, a number of social media companies are implementing voluntary measures to show they can self-police before the 2018 US mid-term elections. 

Facebook, Twitter and Google/YouTube have instituted new transparency and verification measures for advertising, designed to limit the ability of foreign actors, such as Russia, to run campaigns during national elections. Twitter is taking down malicious botnets that contribute to disinformation campaigns. YouTube, Google and Facebook are looking at how their algorithms trend, sort and promote content, after criticism that disinformation outperforms real news in their news feeds. Facebook is allowing access to data for research (with an emphasis on looking ahead, not behind), and partnering with the Atlantic Council’s Digital Forensic Research Lab to “identify, expose, and explain disinformation during elections around the world.”  

Most of these measures are viewed as important steps forward, if not enough to deter or disrupt hostile information campaigns, which constantly evolve to evade detection. In private conversations, journalists and researchers have also voiced concerns that Facebook’s new partnerships are another effort to “buy up” talent and expertise that had previously provided outside review, even as Facebook retains greater control of the data. Additionally, some of the algorithmic tweaks are having unintended consequences, including making the problem of “siloing” worse.

The focus has been on ads, bots and Russians. But ads aren’t really the issue. Bots aren’t the only, or even the primary, means of amplification. The Kremlin prefers to use proxies and deniability — and its disinformation tactics, techniques and procedures (TTPs) are being cloned by a range of state and non-state actors around the world. Focusing on these areas does nothing to alter the fundamental nature of the business model of social media, which creates echo chambers by design.

All of these new measures seek to address a past set of problems instead of the next ones. The challenge is that they still don’t acknowledge, let alone tackle, the problem of yesterday and today: in particular, the use of false identities, deep fakes and digital illegals to influence online discourse.

The idea is essentially this: by creating real-seeming personalities and groups that fit a particular profile and embrace similar views, then networking them together into a semblance of community, it is possible to build an incredibly persuasive, authentic-seeming echo chamber designed to engage and capture people susceptible to those views. You can stumble into it and find information from a person or group that seems like you. This can be done positively, to reinforce existing perceptions, or negatively, to “mainstream” fringe views, promote conspiracy theories and potentially radicalize target populations.

In Russian information operations, this process is known as creating protest or activation potential — another way of saying that people are being pushed toward behavioural change. Research at New Media Frontier has shown that the integration of different actors — bots, cyborgs and humans — results in the most effective amplification of narrative. False identities can be used in any of these roles but are most effective when they act as the nodes of the network map. 

“In my experience studying computational propaganda, I’ve found the vast majority of bots are unsuccessful — that is, human users do not interact with them extensively,” says Steve Kramer, president and chief scientist at Paragon Science, which uses proprietary tools to monitor narrative online.

“Much more worrisome are cyborgs — actual human users who employ specialized software to automate their social posts…These cyborgs often use multiple ‘sock puppet’ accounts to amplify their messages.” This amplification positions them as distributors of information — the hubs in a hub-and-spoke model of dissemination.

Deep fake personalities with strong views can occupy these hubs — nodes — acting as engagement centres for other amplifiers and for people seeking those views. The impact is more effective when several connect to one another, reinforcing the narrative and overriding the internal doubt or caution of the audience they seek to engage.
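For readers who want a concrete picture of the hub-and-spoke structure described above, the small sketch below shows one common way analysts look for amplification hubs in share or retweet data. It is illustrative only: the account names and share events are invented, and it uses the open-source networkx library to rank accounts by in-degree (how many distinct accounts amplify them) and betweenness centrality (how much they bridge otherwise separate clusters) — rough computational stand-ins for the “nodes of the network map” discussed here, not a description of any platform’s or researcher’s actual method.

```python
# Illustrative sketch: spotting candidate "hub" accounts in an amplification network.
# Assumes a hypothetical edge list of (amplifier, original_author) share events;
# all account names and data below are invented for the example.
import networkx as nx

# Each tuple means: the first account shared/retweeted content from the second.
share_events = [
    ("sock_puppet_1", "persona_hub"), ("sock_puppet_2", "persona_hub"),
    ("sock_puppet_3", "persona_hub"), ("organic_user_a", "persona_hub"),
    ("organic_user_b", "persona_hub"), ("persona_hub", "persona_hub_2"),
    ("organic_user_c", "persona_hub_2"), ("organic_user_a", "organic_user_b"),
]

g = nx.DiGraph()
g.add_edges_from(share_events)

# In-degree: how many distinct accounts amplify this one (the spokes around a hub).
in_degree = dict(g.in_degree())

# Betweenness: how often an account sits on paths between other accounts,
# i.e., how much it acts as a bridge between otherwise separate clusters.
betweenness = nx.betweenness_centrality(g)

# Rank accounts by a simple combination of the two signals.
candidates = sorted(
    g.nodes(),
    key=lambda n: (in_degree.get(n, 0), betweenness.get(n, 0.0)),
    reverse=True,
)

for account in candidates[:3]:
    print(f"{account}: in-degree={in_degree.get(account, 0)}, "
          f"betweenness={betweenness.get(account, 0.0):.3f}")
```

In practice, a high score here is only a starting point for human review; the same signals flag genuinely popular accounts as readily as coordinated personas.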

How effective is this strategy? Special Counsel Robert Mueller’s indictments have detailed the lengths to which Russian “troll farms” went to verify false identity accounts, using fake identity documents, addresses, bank accounts and other means. 

This was discussed at a recent meeting with Baltic military and intelligence officials responsible for monitoring the information realm.

“Fake accounts can be used to build a sense of false reality,” said one senior officer. He cited an example in which false identity accounts supporting a particular movement were, his team believed, likely created by that movement’s opponents in order to lull its supporters into complacency — a risky psychological technique, but one that made sense in this case, because the opposing group did not consume as much online media.

New research findings support the assumptions underlying military instruction on psychological operations and strategic deception. For example, a recent paper shows how much people love echo chambers: how quickly beliefs harden when people are exposed to views similar to their own (especially when those views are negative or emotional), and how automation accelerates and reinforces these psychological effects. Research data is now showing the power of false online discourse to harden views or create a false perception of reality. When such operations focus on elections, that hardening and complacency are used to mobilize or suppress voter turnout.

It is becoming clear, as awareness of the SCL-related data scandal expands, just how many companies selling these kinds of influence tools already exist, and how dangerous it can be to sell those tools to the worst actors, who in turn use them to deceive opponents and supporters alike.

To counter these effects, there have been discussions about expanding “media literacy,” integrating clunky fact-checking systems into the platforms and creating ratings systems for the quality of news. But in Ukraine, Poland and the Baltic States, on the front lines of the fight against Russian disinformation, these gestures are met with skepticism.

Jakub Kalensky from the European Union’s East StratCom Task Force jokes that keeping Russian disinformation at bay is something like “teaching the village better fire safety without admitting there is a capable arsonist.”

“We are discussing that we should train more firemen and teach kids how to put out fire, and we blame ourselves that we should have built brick houses instead of wooden houses — but we are not discussing that we should catch the arsonist. He is capable and experienced, and will keep burning houses no matter how many aqueducts we build.”

Four years after the seizure of Crimea highlighted how far disinformation had evolved, Kalensky’s take is an understandable one.

“We need to do a better job of explaining how our societies are being manipulated. We need to show what is real, and show narrative that is not, and explain how that narrative works,” offered another Baltic military officer who tracks Russian disinformation. “You can’t fight everything, every bad actor, all around the world. Far better is to build a castle — an information castle — and promote it to everyone as a safe place.” This “castle” would include “national information strongholds” of statistics and information on trusted sites. 

Today, there is no stronghold. Platforms remain vulnerable to state actors, non-state groups and private sector mercenaries, all using a range of tools for psychological coercion. This landscape will get worse as information attacks are increasingly cross-linked to cyber attacks, and as artificial intelligence tools become easier to use. 

For now, the platforms themselves are in the driver’s seat to find solutions to digital disinformation. In some respects, the easiest path is for them to reform: to retain their central place in the information environment, but with improved rules and ethics that limit abuse by bad actors. The major social media platforms are, and will likely remain for the foreseeable future, personality projects of their founders. But this means they can pivot fast if the will is there. It would help if they were transparent with data for research, including aggregated and historical data. It would help if their internal teams were allowed to openly discuss the trends and patterns of behaviour they observe, often in real time. It would help if they took some responsibility for the spread of disinformation.

We enter the 2018 election cycle with the potential for great peril: the power of these tools is known, but little has been done to limit their availability or their application. Our legislators don’t know how to regulate the algorithmic black boxes; Russia will change up its TTPs even as new bad actors proliferate; and this will present new challenges. For example, now that “the resistance” is so spun up against Russian disinformation, expect to see that same disinformation weigh in behind Democratic candidates.

In the fight against disinformation, the battlefield will become far less clear over time.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Molly K. McKew (@MollyMcKew) is a writer and expert on information warfare; she currently serves as narrative architect at New Media Frontier, a social media intelligence company. As an analyst and author, she has written articles for Politico Magazine, Wired, the Washington Post, and other publications. She is a frequent radio/TV commentator on Russian strategy, briefs military staff and political officials on Russian doctrine and hybrid warfare, and lectures for psychological defense courses. McKew is also CEO of Fianna Strategies, a consulting firm that advises governments, political parties, and NGOs on foreign policy and strategic communication. Her recent work has focused on the European frontier — including the Baltic states, Georgia, Moldova, and Ukraine — where she has worked to counter Russian information campaigns and other elements of hybrid warfare.