Last month, in its final throes before the snap summer election, the Canadian government unveiled a proposal for a new online harms bill. If it passes, social media platforms will be responsible for taking down five categories of already illegal content and speech within 24 hours, and the public will be able to report content to a new regulator, the Digital Safety Commissioner, who can order the social media platforms to take down or even repost content. This bill comes after years of government consultations in not only Canada but other countries where similar online harms laws have been enacted.
It also notably comes after a disastrous policy rollout of the government’s Broadcasting Act reforms, Bill C-10, which have rightly created an environment of deep distrust within the tech policy community.
The online harms bill has therefore — unsurprisingly — led to a backlash from some conservatives, civil libertarians and many internet rights activists. While for some this concern is rooted in a desire to stave off all regulation, and some critique is weakened by over-the-top rhetoric, there are clearly serious problems with this legislation as currently drafted.
For example, forcing platforms to take down content in 24 hours has been shown in other countries, such as Germany, to lead to over-censoring. Increased data sharing between platforms and the police poses serious privacy risks. And mandating telecom companies (which the bill calls “online communication service providers”) to build the infrastructure to block access to non-compliant platforms threatens long-held notions of net neutrality.
This is not to say that new regulations are not urgently needed — they are. Nor that there are not some smart and considered aspects of this legislation — there are. I have spent years studying how we can responsibly regulate platforms and working with networks of global scholars on policy options for governments around the world. This topic is hard, with difficult trade-offs, and policy makers and scholars are still learning, but there is a growing body of international evidence on what works and what doesn’t. The challenge is that rather than focusing on the root causes of the problem — the mass collection of personal data and the business model that incentivizes harmful content — the government has focused on the symptoms of this structural problem — the content itself. And, in so doing, it has stepped into the perilous space of governing speech.
Much of the criticism that the online harms bill has received is fundamentally about the government limiting speech. However, what is often left out of these critiques is the reality that there is a central tension in democratic societies between two democratic rights — the right to free speech, and the right to be protected from harmful speech. Every democracy balances these rights differently.
In Canada, while our Charter of Rights and Freedoms seeks to value both, in our current debate many who advocate for the rights of those harmed by hate speech are left out of the conversation. All of this makes the perspective of my recent guest on Big Tech even more critical. Few people have spent more time thinking about these kinds of questions than Jameel Jaffer.
Jaffer is the executive director of the Knight First Amendment Institute at Columbia University, and the former deputy legal director of the American Civil Liberties Union. While Canadian, Jaffer has spent most of his career in the United States, where he has been involved in some of the most important free speech litigation of the past two decades, including a successful challenge to the Patriot Act, a lawsuit against the National Security Agency, and freedom-of-information requests to force document disclosures relating to secret torture and drone programs.
He was also a part of the Canadian Commission on Democratic Expression, whose report informed some (but notably not all) of the current online harms bill.
As the Canadian government gets closer to passing some kind of online harms legislation, I believe this is a really important conversation to be having. We’re at a point where the discourse around this issue seems almost irreparably polarized: you’re either a for-free-speech person or a for-safe-speech person. This dichotomy is as trite as it is unhelpful.
As Jaffer remarked in our conversation, “I don’t see it as weighing free speech against other interests. Or limiting free speech because of the harms that free speech causes. I see it as thinking about what free speech really means — like, what values are we trying to protect when we say we care about free speech? And I think that most of us think more or less the same things. That we want to protect everybody’s right to participate in the conversation. We want to ensure that free speech works for our democracy.”
We clearly need smart internet regulation that both protects free speech and mitigates potential harms. And in order to arrive at this balance, we would be wise to listen to the thoughtful and considered views of Jameel Jaffer.