The Patchwork of Policy Working to Fend off Misinformation

Election-oriented misinformation is one of the most pressing issues of the information age. What have governments, tech companies and civil society done to tackle this challenge?

Published: October 4, 2019

Author: Samarth Bansal

Google searches for “fake news” spiked after Donald Trump won the 2016 US presidential election. While some analysts attributed Trump’s victory to social media-driven propaganda and disinformation campaigns originating in Russia, the president branded journalistic reporting he didn’t like as fake news and the press as an “enemy of the people.” Meanwhile, journalists and academics intensified their scrutiny of social media platforms, revealing how digital technologies were routinely weaponized to amplify junk information for economic and political ends.

Underscored by Oxford Dictionaries’ decision to declare “post-truth” its 2016 international word of the year, election-oriented misinformation is one of the most pressing issues of the information age. What have governments, tech companies and civil society done to tackle this challenge? And have their efforts been effective?

The High Stakes of Misinformation

Anecdotes about the public falling for fake news often lead to outrage and moral panic, but they have limited value in diagnosing or solving systemic problems. Evaluating the scale and impact of misinformation is hard, and expert opinion is divided.

Take the 2016 American election, for example. In her 2018 book Cyberwar, Kathleen Hall Jamieson, a professor of communications at the University of Pennsylvania, argued that “it is more likely that Russian trolls changed the [2016 American] election’s outcome than that unicorns exist.” Jamieson laid out multiple factors to back her hypothesis, including the strategic release of stolen documents through WikiLeaks (amplified by the mainstream media) and disinformation disseminated on social media.

“Ultimately, Trump’s election came down to 80,000 votes across three states — and by selectively depressing turnout with divisive social media posts, Jamieson argues, Russia’s interference was decisive,” The Verge noted.

Brendan Nyhan, a political scientist at the University of Michigan, disagrees: “many of the initial conclusions that observers reached about the scope of fake news consumption, and its effects on our politics, were exaggerated or incorrect.” In a March blog post, Nyhan wrote that there’s no evidence that “factually dubious for-profit sites whose content was shared millions of times on Facebook” were responsible for Trump’s victory.

“Research I co-authored finds that most people didn’t visit these sites at all in 2016. The same principle applies to Facebook political ads, which still have quite limited reach in 2018 relative to television ads; deepfake videos in politics, an idea where the media coverage radically outstrips the evidence of a crisis; and Russian hacking and information operations, a worrisome violation of our democratic sovereignty that was nonetheless relatively inconsequential to 2016’s electoral outcome.”

That doesn’t mean fake news is not a problem — far from it. Two things can be simultaneously true: misleading information may not have swayed elections, yet it still threatens the health of a democratic society by pushing fringe ideas into the mainstream, sowing distrust in traditional institutions and drowning facts in a sea of irrelevance. The problem deserves the full attention of the public and policy makers alike.

The phenomenon — a polluted information ecosystem — is not entirely new; false stories and political disinformation campaigns have been around for centuries and humans’ affinity for emotional content is also a constant.

What has changed is the scale: an unregulated supply of misleading content, easily discoverable and shareable by both humans and bots. The vast volume of data mined by social media platforms and offline data brokers allows for paid advertising and micro-targeted, personalized content to shape multiple realities for different people. While it’s not just a technology problem, one can reasonably argue that social media has led to the accelerated distribution of misleading information.

The response to curb the spread of misleading content has been varied. Platforms are setting some policies about content moderation, civil society is accelerating efforts on fact-checking and governments are introducing legislation to regulate technology and online content. While stakeholders have all responded, the impact is limited. A scan of the initiatives underway to combat the spread of misinformation illustrates that some efforts are little more than virtue-signalling, but there is still hope for a stronger information ecosystem.

What Governments Have Done

Around the world, government response to misinformation can be categorized under four broad headings: monitoring social media content and criminalizing the spread of misinformation (read: censorship); fact-checking and media literacy initiatives; laws to regulate social media platforms; and legislation for more transparency in online political advertisement.

Monitoring Social Media Content and Criminalizing the Spread of Misinformation
Censorship is the most common measure adopted by countries to prevent fake news. The idea is simple — monitor the activity on social media, identify people and organizations posting or sharing problematic content, and punish them.

Brazil’s Federal Police announced a new program to “identify and punish the authors of ‘fake news’” in the run-up to its 2018 election. Under a Cambodian law introduced in 2018, publishers of fake news could be jailed for two years and fined US $1,000.

In Egypt, a law passed in July 2018 “treats social-media accounts with more than 5,000 followers as media outlets,” meaning they can be prosecuted for publishing fake news, which is specified as a crime under the Egyptian penal code.

In May 2019, Singapore passed a law to regulate online speech and penalize the propagation of any “false statement of fact,” specifying hefty fines and jail terms for posting misleading content or using bots to propagate it. Malaysia criminalized the sharing of misinformation in 2018.

The problem with most of these laws is ambiguity. What constitutes fake news? What happens when the person sharing the content is not aware that the information is incorrect?

According to Poynter, a non-profit journalism institute, France was the first country to pass a law that clearly defines fake news: “Inexact allegations or imputations, or news that falsely report facts, with the aim of changing the sincerity of a vote.” The law, passed in 2018, allows French authorities to remove misleading content from social media and block the websites that publish it.

Similar provisions on monitoring and censoring have been introduced across the world, but their efficacy is still unclear.

Worse, in countries where social media remains one of the few avenues for the free expression of ideas and dissent, “solving fake news” has served as a pretext for authorities to limit free speech.

Fact-checking and Media Literacy Initiatives

These initiatives aim to help citizens become critical consumers of news and information, and to use state authorities’ reach and resources to combat rumours.

Brazil launched an online fact-checking page to clarify misleading information circulated ahead of last year’s elections. The Cambodian government is planning to launch a television show on fake news. In Indonesia, the government created a website that allows citizens to report fake news and check the accuracy of submitted stories, and started weekly briefings on fake news to educate the public about the spread of disinformation. In Italy, the government set up an online portal where people can report misinformation; police will fact-check the reported stories and take action if laws are found to have been broken. In Nigeria, the army set up a helpline for locals to report fake news and started using radio broadcasts to debunk false stories.

Sweden took a different approach. Ahead of the 2018 election, the government announced that it would set up a “psychological defence” authority to “ensure that factual public information can be quickly and effectively communicated” and to “identify, analyze and confront influencing operations.”

Poynter highlighted the difference in Sweden's approach: “rather than attempting to directly fight false or misleading information, it instead is aimed at promoting factual content.”

The Canadian government announced that it will spend CDN$7 million on “digital, news, and civic literacy programming” and launched a campaign to promote citizen literacy about online misinformation. The program is designed to encourage citizens “to read a diversity of sources, think before they share information online, ask them to think critically about what they see, question if messages are trying to influence them and encourage them to rely on trusted sources for news.”

Fact-checking has its limitations. An analysis by Alto Data Analytics found that the “reach of fact-checkers is limited, often to those digital communities which are not targets for or are propagating disinformation.” Another study of the 2016 American election found that “fact-checks of fake news almost never reached its consumers.”

Regulating Social Media Platforms

While censorship regulation aims to control the behaviour of content creators and media literacy initiatives aim to empower content consumers, regulating online platforms turns attention to content distributors.

Social media companies have consistently argued that they are platforms and not publishers. Publishers (newspapers, for example) make editorial calls about what content is published, how it is authored and which angles are and are not included. Social media companies, in contrast, often hesitate to make such calls, citing political neutrality; they argue that platforms are mere intermediaries and do not bear responsibility for users’ posts.

However, social media companies are perpetually making editorial calls. Their algorithmic filters make data-driven decisions to rank content and show personalized feeds to users; what gets amplified depends on the metrics fed into the algorithms. At the same time, most platforms maintain complex (if vague) guidelines on prohibited content, with takedowns occurring regularly.
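
To see why the choice of metric amounts to an editorial decision, consider a deliberately simplified sketch of an engagement-optimized ranker. This is an illustration only: real feed-ranking systems are proprietary and far more elaborate, and every signal and weight below is an assumption made for the example.

```python
# Illustrative toy only: not any platform's actual ranking system.
# Signals and weights are invented to show how an engagement metric
# decides what gets amplified, regardless of accuracy.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    clicks: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: actions that spread content count more.
    return 1.0 * post.clicks + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts: List[Post]) -> List[Post]:
    # Whatever maximizes the chosen metric rises to the top;
    # the metric, not the truth of the content, is the editorial call.
    return sorted(posts, key=engagement_score, reverse=True)
```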

Only a few governments have started holding platforms accountable for the content they host. In January 2018, Germany enforced a law that requires online platforms with more than two million users to remove "obviously illegal" posts within 24 hours or face fines up to €50 million. In May 2019, while announcing the launch of a new Digital Charter to enforce rules governing social media platforms, Canadian Prime Minister Justin Trudeau tweeted: “Social media platforms must be held accountable for the hate speech & disinformation we see online – and if they don’t step up, there will be consequences.” The Indian government has demanded WhatsApp track the origin of messages on its platform to identify and curb crimes triggered by fake news, a demand the company has rejected.

Transparency in Online Political Ads 

The strongest demand for regulating platforms lies in the push to increase transparency around who funds online election-related advertisements.

Why does it matter? Alex Stamos, former Facebook chief information security officer (CISO) and current professor at Stanford University, has argued that advertising and recommendation engines should be at the top of the list for social media regulation. These services put content in front of people who did not ask to see it. It is crucial to know who is paying for these ads — especially given the threat of foreign interference in elections — and which communities are being targeted.

The demand for online ad regulation began in the United States. In October 2017, members of the US Congress introduced a bill, the Honest Ads Act, which “would require online platforms such as Facebook and Google to keep copies of ads, make them public and keep tabs on who is paying — and how much.” The French government passed a similar law in 2018.

Canada’s Bill C-76 forces tech platforms to maintain a registry of domestic and foreign political advertisers during elections. And Israel banned the publication of anonymous ads on the internet on any platform in the run-up to its April 2019 election.

Australia is also moving toward transparency in online political advertising, asking platforms to include the name and address of the people responsible for election-related ads. When companies failed to do so, the Australian Electoral Commission warned Twitter and Facebook that they would face court-ordered injunctions if they did not remove illegal political ads.

When Government Officials are Bad Actors

Any step taken by governments faces one big challenge: What happens when politicians and government officials themselves engage in the very activities they are meant to regulate, when the regulator itself is the bad actor?

That’s exactly what is happening. “Government agencies and political parties around the world are using social media to spread disinformation and other forms of manipulated media,” Philip Howard, director of the Oxford Internet Institute, said in September at the launch of a new study by Oxford researchers.

At least 70 countries launched political disinformation campaigns in the last two years, the study found. Governments spread manipulated media to garner voter support, discredit political opponents and downplay opposing views, the researchers noted.

“Although propaganda has always been a part of politics, the wide-ranging scope of these campaigns raises critical concerns for modern democracy,” Howard said.

What Platforms Have Done

An increasing number of people get their news from social media and its prominence is growing as more people gain access to the internet.

The problem is that there is no easy way for algorithms to identify problematic information among the millions of posts from millions of users. Right now, measuring the degree of truth associated with a post still requires human judgment.

Critics argue algorithms that are optimizing for engagement promote false and outrageous content over fact-based news. A 2018 study by three MIT scholars found that “false news spreads more rapidly on the social network Twitter than real news does — and by a substantial margin.”

Platforms play a major role in controlling information flows, but their private ownership makes it difficult to hold them accountable. “Major tech companies are all acting in a quasi-governmental manner,” Stamos, the former Facebook CISO, said in a talk at the University of California, Berkeley, “but they don’t have the legitimacy of governments.”

“When I was the CISO at Facebook, I had an intelligence team: a team of people whose entire job was to track the actions of state governments and their activities online and then to intercede to protect citizens of other governments. That is a unique time that a private company had had that responsibility,” Stamos explained.

“The companies all have people who decide what is acceptable political speech, what is an acceptable advertising standard,” he said. “But they don't have the transparency, they have never been elected.”

Facebook

Under increased government and public scrutiny, Facebook and other large platforms have taken steps to control misinformation.

Since 2016, Facebook says it has used the “remove, reduce, and inform” strategy to “manage problematic content across the Facebook family of apps.” In an April blog post, the company explained the strategy: “This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share.”

The company is also modifying its algorithms to reduce the reach of Facebook groups that are known for spreading misinformation.

In April of this year, the company introduced “click-gap,” a new metric that Facebook’s news feed algorithm will use for ranking posts to “ensure people see less low-quality content in their News Feed.” According to Wired, click-gap is the company’s “attempt to limit the spread of websites that are disproportionately popular on Facebook compared with the rest of the web. If Facebook finds that a ton of links to a certain website are appearing on Facebook, but few websites on the broader web are linking to that site, Facebook will use that signal, among others, to limit the website’s reach.”
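
Facebook has not published how click-gap is computed; the sketch below is a hypothetical rendering of the idea as Wired describes it, with the link counts, smoothing constant and threshold all assumed for illustration.

```python
# Hypothetical illustration of a "click-gap"-style signal; not Facebook's code.
# on_platform_links: how many links to a domain circulate on the platform.
# external_inbound_links: how many sites on the broader web link to that domain.

def click_gap_signal(on_platform_links: int,
                     external_inbound_links: int,
                     threshold: float = 100.0) -> bool:
    """Return True if a domain looks disproportionately popular on-platform."""
    # The +1 avoids division by zero for domains with no external links.
    ratio = on_platform_links / (external_inbound_links + 1)
    return ratio > threshold

# A site shared heavily on the platform but rarely linked elsewhere would
# trip the signal and could be down-ranked (one signal among many).
print(click_gap_signal(on_platform_links=50_000, external_inbound_links=40))      # True
print(click_gap_signal(on_platform_links=50_000, external_inbound_links=30_000))  # False
```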

Facebook has expanded its third-party fact-checking program, but has acknowledged its limitations: “There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” the company wrote in a blog post.

In 2018, Facebook adopted a new strategy for taking down propaganda accounts, targeting what it calls “coordinated inauthentic behaviour” (CIB). Facebook, in this case, doesn’t look at the content posted in the groups and pages: it takes decisions based on behaviour alone — people using fake accounts to misrepresent themselves, breaking spam rules or coordinating activity on the platform through multiple accounts.
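
One way to picture a purely behavioural signal (a hypothetical example; Facebook’s detection pipeline is not public) is to flag URLs pushed by many distinct accounts within a narrow time window, a common fingerprint of coordinated amplification.

```python
# Purely illustrative: one behavioural signal for coordinated amplification.
# Real investigations combine many signals with human review; the window
# and account threshold here are assumptions for the example.

from collections import defaultdict
from datetime import timedelta

def flag_coordinated_urls(posts, window=timedelta(minutes=10), min_accounts=20):
    """posts: iterable of (account_id, url, timestamp) tuples.
    Returns the set of URLs posted by at least `min_accounts` distinct
    accounts within any single `window`-long period."""
    by_url = defaultdict(list)
    for account_id, url, posted_at in posts:
        by_url[url].append((posted_at, account_id))

    flagged = set()
    for url, events in by_url.items():
        events.sort()  # order by timestamp
        for i, (start, _) in enumerate(events):
            accounts = {acct for ts, acct in events[i:] if ts - start <= window}
            if len(accounts) >= min_accounts:
                flagged.add(url)
                break
    return flagged
```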

While this strategy is not limited to curbing fake news, it does hamper bad actors’ ability to coordinate the amplification of propaganda. In the run-up to the 2019 Indian election, for example, Facebook took down hundreds of accounts engaging in CIB, including pages known to peddle fake news. Unfortunately, this was only a small portion of the content generated by users working to exploit Facebook as a platform for spreading misinformation.

Some research, although limited, indicates that Facebook has had some success: an October 2018 study from authors at Stanford University and New York University found that “interactions with fake news stories fell sharply on Facebook while they continued to rise on Twitter.” The authors concluded that “Facebook’s efforts to limit the diffusion of misinformation after the 2016 election may have had a meaningful impact.”

However, researchers can only sample data, and only Facebook knows the full scope of the problem. The company, for its part, is promoting independent research on social media’s role in elections by offering grants and access to privacy-protected Facebook data to selected researchers.

The project, however, has not panned out as planned: “18 months later, much of the data remains unavailable to academics because Facebook says it has struggled to share the information while also protecting its users’ privacy,” the New York Times reported earlier this week.

WhatsApp

Facebook-owned WhatsApp is a major carrier of fake news in the developing world. The WhatsApp sphere — and that of other instant messaging apps with end-to-end encryption — differs from Facebook and YouTube in important ways: the companies can’t see the content, there are no algorithms to rank content and there are no filters.

In response, WhatsApp has restricted message forwarding (to add friction to virality), added a “forwarded” label to show that a message was passed along rather than composed by the sender, blocked accounts that engage in spam-like behaviour and launched public education campaigns to raise awareness of fake news.

Google

In February 2019, Google published a white paper that discusses the company’s “work to tackle the intentional spread of misinformation across Google Search, Google News, YouTube, and our advertising platforms.”

The company’s approach to tackle disinformation is very similar to that employed by Facebook: “make quality count in our ranking systems, counteract malicious actors, and give users more context.”

Google outlined the behaviours it prohibits, including “misrepresentation of one’s ownership or primary purpose on Google News and our advertising products, or impersonation of other channels or individuals on YouTube.” This is relevant to tackling disinformation as “many of those who engage in the creation or propagation of content for the purpose to deceive often deploy similar tactics in an effort to achieve more visibility,” the company wrote in the paper.

To empower users, Google’s products and services “expose users to numerous links or videos in response to their searches,” which “maximizes the chances that users are exposed to diverse perspectives or viewpoints.”

New Norms

Any discussion on curbing misinformation must recognize that the task ultimately requires humans to make calls on what is acceptable speech. People interpret the same information in different ways, thanks to inherent cognitive biases; people seek information that confirms their beliefs and avoid facts they find inconvenient. It is hard to change people’s minds.

We are in the middle of a societal transformation. While regulators and platforms alike have taken a number of steps to curb the spread of misinformation, they are little more than baby steps. To find a workable solution will require a more coordinated — and perhaps global — approach.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Samarth Bansal is a freelance data journalist based in New Delhi, where he writes about technology, politics and policy. His work has appeared in The Wall Street Journal, the Hindustan Times and The Hindu.