Moderating Content in the Age of Disinformation

June 19, 2019
A Facebook staff person monitors election-related content. (AP Photo/Jeff Chiu)

In late May, a few days before an Ottawa meeting of the International Grand Committee on Big Data, Privacy and Democracy, a doctored video of US Speaker of the House Nancy Pelosi swept across social media. In the clip, Pelosi’s speech had been slowed down and the pitch of her voice altered to give the impression she was intoxicated and slurring her words. The video was posted on Facebook, shared widely on Twitter — including by President Donald Trump’s lawyer Rudy Giuliani — and reported on by traditional news outlets.

YouTube quickly determined the video violated its policies and banned it from the platform. Facebook, however, kept the video up, with its officials repeating the now-familiar explanation that the company is not in the business of determining what is true. Instead, it added a warning label indicating the video was questionable and reduced how widely it was distributed in users’ news feeds.

In Ottawa, a member of the Grand Committee asked Facebook representative Neil Potts what would happen if a similarly doctored video of CEO Mark Zuckerberg appeared on the platform. “If it was the same video, inserting Mr. Zuckerberg for Speaker Pelosi, it would get the same treatment,” Potts said.


It didn’t take long for that hypothetical question to become reality. Last week, a more sophisticated doctored video that appeared to show a sinister Zuckerberg revelling in stealing Facebook users’ data was posted on Instagram, which is owned by Facebook. To date, it has not been taken down, although one of the video’s creators claimed that, as with the Pelosi video, the company had flagged it as disinformation and limited its reach.

These duelling fake videos cast light on the ongoing challenges platforms face in moderating user content, especially as videos become easier for the average consumer to manipulate.

Deep Fakes versus Shallow Fakes

The video of Zuckerberg is what’s known as a deep fake: it uses artificial intelligence (AI) software to essentially merge one moving, speaking face with another, in order to make someone appear to do or say something that they have not done or said. It was created by two artists working with a tech start-up called Canny AI.

Tech and disinformation experts have been raising alarms about deep fakes for the past year or so, warning that as the technology becomes more widely available, it is more likely to be misused to cause political disruption, chaos or even violence. While the Canny AI team raised some eyebrows by turning Zuckerberg into a cartoon-villain version of himself, a malicious actor could just as easily have used the technology to make a politician utter racial slurs or announce a coup d’état.

But the alterations to the Pelosi video were nowhere near that sophisticated. The video was an example of disinformation that draws on basic editing trickery, colloquially known as a “shallow fake” or “cheap fake.” Experts who track political disinformation say photos and videos that use simple manipulation, such as editing clips from an interview out of context or putting a misleading caption on a photo, have been prevalent in election campaigns around the world. While deep fakes remain a huge potential problem, the Pelosi video shows that, for the time being, video and image manipulation don’t need to be high tech in order to be effective.

Free Speech versus Free Reach

Facebook’s decision to minimize the spread of the Pelosi and Zuckerberg videos, rather than taking them down entirely, reflects an approach increasingly adopted by platform operators as they grapple with complicated issues around content moderation and freedom of expression. Researcher Renée DiResta describes this as the difference between free speech and free reach: people are free to say whatever they want, even if it is false or offensive, but social platforms are not obligated to amplify those comments to a wider audience. Unlike in the social media environment of 2016, when more engaging posts were generally shared more widely regardless of their veracity or potential for offline harm, some tech companies now adjust their algorithms to stifle the spread of harmful content. Other methods of reducing reach include warning users when they view or share a questionable post and posting links to factual information alongside false content.
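To make the "free reach" idea concrete, here is a minimal, hypothetical Python sketch of how a ranked feed could down-weight, rather than remove, posts that fact-checkers have flagged. The field names, scoring function and penalty value are illustrative assumptions, not a description of Facebook's or any other platform's actual ranking system.

```python
from dataclasses import dataclass

# Illustrative penalty applied to flagged posts; real platforms (if they use
# such a factor at all) tune these values through experimentation.
FLAGGED_REACH_PENALTY = 0.2  # keep only 20% of the post's normal ranking score


@dataclass
class Post:
    post_id: str
    engagement_score: float          # predicted likes, shares, comments
    flagged_as_false: bool = False   # e.g., marked by third-party fact-checkers


def ranking_score(post: Post) -> float:
    """Down-rank flagged posts instead of removing them: the post stays
    available, but its score, and therefore its placement in the feed,
    is sharply reduced."""
    score = post.engagement_score
    if post.flagged_as_false:
        score *= FLAGGED_REACH_PENALTY
    return score


def build_feed(posts: list[Post]) -> list[Post]:
    """Order posts for display, highest score first."""
    return sorted(posts, key=ranking_score, reverse=True)


if __name__ == "__main__":
    feed = build_feed([
        Post("doctored-video", engagement_score=9.5, flagged_as_false=True),
        Post("news-article", engagement_score=4.0),
        Post("vacation-photos", engagement_score=2.5),
    ])
    for post in feed:
        print(post.post_id, round(ranking_score(post), 2))
```

In this toy model, a flagged post remains on the platform but falls in the feed; platforms that take this approach typically pair it with the warnings and fact-check links described above.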

However, the Pelosi video showed that restrictions on reach — at least as they stand now — are not always enough to keep problematic content from spreading. Critics pointed out that the warnings accompanying the Pelosi video on Facebook were so vague as to be meaningless: rather than learning that the video had been manipulated, viewers saw links to news stories describing the video as a hoax, while those who tried to share it got a pop-up box labelled “additional reporting on this.” The video was still viewed millions of times. Limiting a post’s reach to reduce the harm of fake or damaging speech has promise, but ongoing research and testing will be needed to make sure it’s done effectively.

The Limits of Content Moderation

Much of the controversy over manipulated media has focused on digital platforms, and rightfully so — social media platforms, such as Facebook, remain a primary news source for a large percentage of the population. However, a range of factors contributed to the millions of views racked up by the Pelosi video, from political polarization to mainstream media outlets struggling with how to cover false narratives. “Whether repeating the lie or attempting to knock it down, the dominant political narrative of the past two days has focused squarely on Speaker Pelosi’s health,” New York Times columnist Charlie Warzel wrote. “And the video views continue to climb. Our attention has been successfully hijacked by a remedial iMovie trick.”

Stricter policies to stem the spread of harmful content on social and digital platforms have the potential to make a difference, but they are only a start. Any effort to counter false and harmful content should also address the broader social and media environment that leads so many people to take these messages seriously. Examples could include enhanced digital media literacy initiatives or more research on the role of journalists in responding to disinformation campaigns. Disinformation is a complicated societal problem, and social media is only one part of it.


The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Stephanie MacLellan is a digital democracy fellow with the Public Policy Forum.