What Can Canadian Lawmakers Draw from the New UK Online Safety Bill?

May 20, 2021

On May 12, 2021, the United Kingdom published its draft Online Safety Bill, landmark legislation designed to keep children safe, stop racial hate and protect democracy online.

Meanwhile, Canada is preparing to launch its own legislation to combat online harms, which the government has confirmed will create a new regulator to target illegal content, including child sexual exploitation, terrorism, incitement to violence, hate speech and the non-consensual sharing of intimate images online.

We asked three experts, “How does Canada’s planned legislation compare to the United Kingdom’s new bill and what might we draw from the Online Safety Bill?” Here are their responses.


Suzie Dunn, the Centre for International Governance Innovation

It’s difficult to compare Canada’s planned online harms legislation to the United Kingdom’s new draft Online Safety Bill, as the full details of Canada’s plan have not yet been released. From what has been announced, we know that both countries propose to task a regulator with enforcing various content moderation requirements for social media companies, including swift takedowns of certain forms of illegal content and, in the United Kingdom’s case, content that is legal but deemed harmful. Companies that fail to remove such content could face sanctions or penalties. Both proposals have faced critiques of their framing of the issue and their proposed methods of enforcing takedowns.

While some form of takedown regulation is necessary to address certain online harms, Canada’s approach to regulating platforms should, as recommended in the Women’s Legal Education and Action Fund (LEAF) report Deplatforming Misogyny, centre human rights, substantive equality and intersectionality, and employ a trauma-informed approach.

Canada’s approach should also emphasize research, education and support for organizations whose clients are impacted by these harms. The UK’s draft bill includes the promotion of media literacy and the development of research on online safety matters, such as users’ experiences of regulated services. As Canada moves forward in developing its legislation, it has an opportunity, as LEAF recommends, to establish a regulator with a strong dual mandate: one that provides meaningful remedies and supports for those impacted by online harms, while also developing research and educational materials to better understand these social harms and the policies used to address them, and mandating supports for front-line and grassroots organizations that can provide direct assistance to people harmed in digital spaces.

Will Perrin, Trustee, Carnegie UK Trust

The draft Online Safety Bill proposes a new safety regime based on risk management. User-generated content platforms and some search companies will have to assess and manage the risks of harm arising from illegal material, material that might harm children, and some other material that might harm adults. In a balancing act, companies will also have to respect free speech, speech in political debate, and material from traditional media that is already regulated or self-regulated elsewhere. The United Kingdom’s highly experienced media regulator, Ofcom, will supervise, acting proportionately to a company’s resources and risk levels. Ofcom will have strong information-gathering powers, as well as powers to fine and, in extremis, disrupt the business activities of social media companies.

An important contrast to what we know of the Canadian proposals is that the UK draft regime is systemic rather than merely palliative takedown: it looks at system design and ensures, on a polluter-pays basis, that the company causing harm bears responsibility for the judgments needed to limit that harm by making its systems work better, under external supervision. The challenge for democracies now is to work together for some commonality in their regulation. The G7 Tech Ministers’ declaration was hopefully a step toward a multilateral system.

Heidi Tworek, the Centre for International Governance Innovation

The draft Online Safety Bill is the latest stage in a long process of designing a regulatory regime for online services in the United Kingdom. The bill differs quite substantially from Canada’s upcoming legislation, offering a much more sweeping vision of the relationship between regulators and platforms than Canada’s planned focus on illegal content. The UK bill builds on the idea of a “duty of care” and seeks to tackle “harms,” which remain rather loosely defined in the legislation; this ambition is broader than Canada’s focus on five categories of speech that are already illegal. But the UK’s ambition may generate “a risk of arbitrary/ill-thought through distinctions creating incoherent and confusing rules that are prone to loopholes,” as an article in TechCrunch put it.

The omissions and commissions of the Online Safety Bill may hold several helpful lessons for Canada. On the one hand, the omissions: the UK bill focuses on user-generated content and does not seem to address advertising. This may be clarified later, or it may be an issue that the UK government intends to address through its new Digital Markets Unit. But the omission seems strange given that advertising underpins the business model of the largest online platforms and ad fraud is a major problem. It is unclear where advertising content fits in Canada’s legislation, but this points to an important issue to consider.

On the other hand, the bill incorporates transparency reports as a crucial part of the obligations of both the regulator (Ofcom) and service providers. This represents an important step toward ensuring transparency from government alongside platforms, although it remains to be seen how it will play out in practice. Canada might consider similar transparency requirements for its new regulator, as well as around any government involvement such as takedown requests. But transparency reports also raise questions about what they really measure and how they influence action, as Sun-ha Hong has pointed out.

To take one example, relying on takedowns as a proxy for efficacy can create problematic incentives to delete content, and instances of content deletion are more common than many might think. At the start of May, Instagram was forced to apologize for deleting posts about missing and murdered Indigenous women and girls (MMIWG). One affected person, Emily Henderson, an Inuk writer based in Toronto, felt that Instagram had not “adequately addressed that feeling of silence and erasure.” If transparency reports are to play a role, it is important to ensure that their metrics don’t create perverse incentives to silence the very communities whom legislation is designed to help.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Suzie Dunn is a senior fellow at CIGI, a Ph.D. candidate at the University of Ottawa and an assistant professor of law and technology at Dalhousie University.

William Perrin is a trustee with Carnegie UK Trust and has worked on many regulatory regimes in the United Kingdom.

Heidi Tworek is a CIGI senior fellow and an expert on platform governance, the history of media technologies, and health communications. She is a Canada Research Chair, associate professor of history and public policy, and director of the Centre for the Study of Democratic Institutions at the University of British Columbia, Vancouver campus.