America’s Newfound Interest in Regulating Tech May Be a Game Changer

Because the United States is home to so many of the tech giants whose products affect our lives, the White House’s newfound interest in regulation could be transformative.

March 30, 2022
Former Facebook employee and critic Frances Haugen answers questions during a U.S. House Subcommittee on Communications and Technology hearing on Capitol Hill in Washington, December 1, 2021. (REUTERS/Elizabeth Frantz)

Could the tide be turning on our tacit acceptance of the role big tech plays in moulding our minds?

On March 1, President Joe Biden declared in his State of the Union address that “we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.” He credited the courage of Facebook whistle-blower Frances Haugen, whom he hosted as his guest at the joint session of Congress and whose revelations last fall made social media’s impact on the mental health of young people around the world undeniable. It is a relief that Haugen’s inside story has finally struck a chord in the United States. For many, the issues she highlighted came as no surprise.

As campaigners such as Privacy International have flagged, in the current data economy even our mental health is for sale. Many mental health websites around the world share information about their visitors, including, in some cases, answers to self-assessment questionnaires about mental health. That information, whether about children or the adults around them, is highly valuable in an online ecosystem where it is assumed to be legal to prey on and play with people’s emotional states. And it is the kind of information that could be fed into the algorithms that decide the price, or availability, of your medical insurance.

Reports from Australia in 2017 claimed that Facebook had offered advertisers real-time access to the emotional states of teenagers and young adults, allowing them to be targeted when they were at their lowest ebb. Facebook denied the claim. But last year, Reset Australia found that, for a few dollars, it could buy advertising targeting thousands of children with dangerous interests such as extreme weight loss, alcohol and gambling. Surveillance advertising is the exploitation of all our mental states for someone else’s benefit.

But that weighted blanket Facebook wants to sell you to calm your anxious dreams is just the tip of the iceberg. The surveillance advertising business model is the oil that drives disinformation about COVID-19, turning it into “a partisan dividing line” instead of an infectious disease. That business model is also the pusher of conspiracy theories that lead people to take up arms on the steps of the US Capitol Building. It is the fuel for Russian information warfare in the current crisis in Ukraine, warfare that has been targeting democracies around the world for years. The algorithms that support surveillance advertising thrive on division, whatever the topic.

Campaigners and legislators have been grappling with these issues for more than a decade. Earlier this year, campaigners in Europe had a groundbreaking win with a ruling from the Belgian Data Protection Authority that the consent pop-ups used to legitimize massive online tracking by advertisers are in fact a breach of EU law. Meta’s response to increased EU regulation has been to threaten to withdraw its business from Europe. Perhaps that would be no bad thing. If its business model cannot respect our rights, maybe it’s time for a new tech paradigm.

In the European Union, the Digital Services Act and the AI Act attempt to limit the human rights impacts of technology. And the UK’s Online Safety Bill, touted as a flagship piece of legislation to make the internet safer, was recently published. It will no doubt provoke more intense debates that pit safety against freedom of speech. But the bill’s focus on content is simultaneously too broad and too narrow and fails to touch the real problem. It is the systems, not the content, that cause the real harm. And the issues caused by business models built on surveillance, profiling and targeting go far beyond what we say. They affect how we feel, how we behave, how we spend and how we vote.

Even China has introduced regulation to tackle the influence of recommender algorithms that manipulate the way we see the world. The United States may be late to the party, but as the home of many of the tech giants that affect all our lives, its newfound interest in regulation may be a game changer.

But the reality is that any genuine move to address the harms must go beyond legislating to protect children online or to police content. The biggest threat to our collective human future is the business model that uses vast troves of data on each of us to understand what we think, and to work out how to press our individual emotional buttons in order to change our opinions and our actions. Our children will only be safe when we are all free of it.

This article first appeared on Techonomy.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Susie Alegre is a CIGI senior fellow and an international human rights lawyer.