Five Things to Know About the Hearing on Extremism and Misinformation

March 26, 2021
Mark Zuckerberg testifies remotely at the US House hearing on extremism and misinformation. (US House TV via CNP/Sipa USA)

On March 25, the US House of Representatives Energy and Commerce Committee held a joint subcommittee hearing titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.” Mark Zuckerberg from Facebook, Sundar Pichai from Alphabet and its subsidiary Google, and Jack Dorsey from Twitter all appeared before the committee. Although this was the latest event in an ongoing legislative process that has brought tech executives before Congress several times over the past couple of years, it was their first appearance since the riot at the US Capitol on January 6, 2021.

The session was held online, which denied the politicians the grandeur of holding these hearings inside the US Capitol buildings. They looked awkward, staring into their webcams, while the tech executives appeared poised and well-framed. The entire proceeding was performative and theatrical: politicians pressed the executives for yes-or-no answers to leading questions, while the executives did their best to run out the clock with vague responses.

However, the hearing was also substantive: everyone involved went out of their way to demonstrate the actions they were taking to address the larger issue of rising extremism and proliferating misinformation. Here are five key takeaways from the hearing.

1. Politicians Are Eager to Show That They Are Taking Action to Regulate Big Tech

With a new administration in the White House and the Democrats in control of Congress, there is an eagerness to demonstrate to the voting public that something is being done to rein in big tech. Ohio Republican Representative Bill Johnson even noted that he thought the hearing marked a new relationship between the technology companies on the stand and the government.

During the hearing, while politicians certainly used the opportunity to question the three executives, they also took time to preview their own planned legislation or regulatory ideas. Democratic Representative Peter Welch from Vermont promoted a dedicated regulatory agency that could develop relevant expertise within government — an idea that Mark Zuckerberg responded positively to.

Anna Eshoo, a Democratic representative from California, warned the tech executives that a revamped version of the Protecting Americans from Dangerous Algorithms Act is in the works, as is a bill that would ban surveillance-based advertising as a business model.

New York Congresswoman Yvette D. Clarke noted her plans to introduce the Civil Rights Modernization Act of 2021, which would target discriminatory advertising and algorithmic bias on digital platforms.

These were just some of the initiatives that were previewed or highlighted. Across the board, politicians worked to emphasize to the executives present, and perhaps more so to the public, that they were finally ready to take action.

2. Tech Companies Continued to Assert That They Are Already Doing Everything They Can

Despite ongoing hearings and the countless violent or extremist events that have been propelled, in part, by social media, big tech platforms are eager to tell anyone who will listen that they’re doing everything they can and, in particular, that they are working with trustworthy, external, expert partners. Versions of this response were repeated by all three executives when they were presented with questions regarding their efforts to mitigate the spread of disinformation.

Facebook emphasized its Oversight Board, and Zuckerberg boasted that Facebook removes over a billion accounts per year for being fake or for violating its terms of service. When asked, Zuckerberg also indicated that Facebook has technology that can identify young users who lie about their age in order to sign up for the company’s products.

Zuckerberg also repeatedly called for the establishment of standards, whether in the form of national privacy legislation or to deal with bias and discrimination. This call might be interpreted as an attempt to reinforce big tech’s universal claim that the companies are doing everything they can, and will continue to do so, by adhering to any standards that are set by the government (or industry).

3. Section 230 of the Communications Decency Act Is Due for Reform

Many of the changes proposed at the hearing focused on reforming Section 230 of the Communications Decency Act. This is the decades-old law that has allowed digital platforms to avoid responsibility or liability for the content on their platforms. While politicians are eager to tinker with this law, it is, in some ways, the foundation upon which the US internet industry is built. As Mark Zuckerberg articulated in his opening submission to the hearing, Facebook is open to regulatory change regarding unlawful content, with some caveats:

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection — that would be impractical for platforms with billions of posts per day — but they should be required to have adequate systems in place to address unlawful content.

Such a system would only be viable with transparency reports that detail what companies are doing to moderate content and enforce their policies.

While both the politicians and the company executives agreed that small companies should be treated differently, the general consensus was that large companies should be required to have moderation capabilities. Facebook, in particular, wants a law that would allow companies to moderate content proactively without incurring liability, something it already does.

4. Politicians Spent Considerable Time Highlighting Social Media’s Impact on Children

While many Republican representatives focused on the perceived political bias of the algorithms on these digital platforms, an even more prominent recurring theme was concern about social media’s impact on children. Representatives directed many questions and accusations at the tech executives regarding the dangerous and addictive services and products on their platforms and those products’ detrimental effects on children’s well-being.

Asked outright whether they make addictive products, the executives said no, yet they also acknowledged restricting their own children’s use of those products.

Republicans were not alone in expressing concern about children’s welfare in relation to the digital platforms. Politicians from both parties indicated an interest in legislation that would increase measures to protect children as well as introduce fines for companies who failed to comply with such measures.

5. As Expected, the Tech Companies Proposed Techno-Solutions for Oversight and Transparency

A number of representatives expressed their beliefs about how some of the platforms’ algorithms work, but few took the time to actually ask about or explore the need for algorithmic transparency.

Twitter’s Jack Dorsey did, however, speak to this issue. He repeatedly promoted a “protocol approach” rather than a government approach to regulation. According to Dorsey’s vision, by way of open-source technology and a transparent process, the industry could, in the future, be both decentralized and subject to scrutiny from government or the public.

Dorsey pointed to Twitter’s Bluesky initiative, which is attempting to create decentralized open-source social media protocols, as well as the Birdwatch program, which is a community-based approach to misinformation and fact-checking.

Perhaps Dorsey is anticipating antitrust actions and a decentralized social media ecosystem, where interoperability is both mandated and necessary.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Jesse Hirsh is a researcher, artist and public speaker based in Lanark County, Ontario. His research interests focus largely on the intersection of technology and politics, in particular artificial intelligence and democracy. He recently completed an M.A. at Ryerson University on algorithmic media.