Recent municipal elections in Ontario, Canada, again highlighted the problem of harassing, hateful online speech, prompting some candidates to leave social media over safety concerns. In the Niagara region, for example, a re-elected regional councillor set her social media accounts to private after experiencing personal attacks, while the first openly transgender candidate for municipal office in St. Catharines largely restricted her social media use during the election after experiencing “relentless” harassment.
Social media is so toxic that some political candidates are deciding to forgo using this medium to communicate with constituents. Such pervasive threats and harassment constitute a type of political violence.
Political violence through social media is not new, nor is the gendered nature of online harassment. Women, racialized and LGBTQ+ candidates receive a disproportionate share of abuse, at all levels of politics and across the political spectrum. Politicians, including Conservative member of Parliament Michelle Rempel Garner; Catherine McKenna, former Liberal minister for the environment; and former Toronto councillor Kristyn Wong-Tam have condemned the systemic patterns of harassing and threatening behaviour against female politicians, both online and in person. McKenna specifically called for regulation to address the promotion of violence and hate by social media companies.
Debates over regulating the internet, such as through Canada’s proposed online harms bill, often focus on the risk that speech will be silenced in the future, especially by authoritarian governments. But the discussion tends to downplay how violence and hate online are already driving people from the public sphere. Speech is already being chilled. This is already a problem for democracy and for our aspiration to a free press.
With threats so omnipresent, women may not be willing to stay in or even enter politics. An academic report by Chris Tenove and Heidi Tworek studying online harms during the 2019 Canadian federal election concluded that frequent abuse injures public figures’ well-being and presents “barriers to political participation by people from under-represented groups.”
Online hate also affects how journalists do their jobs, with the result that some are leaving the profession, switching to less public-facing roles or removing their bylines from their work. The first Canadian national survey of online harassment against journalists and media professionals, in November 2021, found 72 percent had experienced online harassment, while 73 percent believed online harassment has grown more frequent over the last two years.
The Canadian Association of Journalists, in a September 2022 statement co-signed by media organizations, called on Canadian policy makers and police to act against hateful speech and harassment of journalists that cause them to fear for their safety. In response, Prime Minister Justin Trudeau said police need to take “seriously” the “pattern of intimidation and attacks” against journalists, as they can have a chilling effect on a free press and democracy.
Beyond more effective policing, the growing number of online threats against politicians and journalists should reignite policy makers’ efforts to move forward on the online harms bill, which is currently being redrafted after receiving largely critical feedback.
Opposition to state regulation of social media often identifies the potential for government censorship as the primary harm. While a legitimate concern, this critique generally does not acknowledge that social media is already silencing people, who self-censor or withdraw from online life entirely because of hateful, abusive speech, as US legal scholars Danielle Keats Citron and Mary Anne Franks have pointed out.
Simply put, the current social media environment permits online hate and abusive, harassing behaviour. As I have laid out in previous articles, meaningful change will not result from relying upon greater transparency from industry, voluntary codes of conduct, or tasking users to flag harmful content. It is not realistic to demand that users police multi-billion-dollar companies.
For regulation to be effective, it must address the companies’ business models — that is to say, advertising-based revenue generated by user engagement. The platforms’ algorithms are designed to increase this engagement, a key growth metric, even when that engagement entails sharing potentially dangerous content such as medical misinformation. Because engagement is monetized regardless of the quality of speech, governments need to make structural reforms, specifically by restricting companies’ use of targeted behavioural advertising. This type of advertising relies on detailed personal information that social media companies continually siphon from their users.
Elon Musk’s takeover of Twitter, and users’ growing concerns about increasing hate speech on the platform, make this the perfect moment for Canadian policy makers to reconsider how to regulate hateful, violent speech on social media. It’s time for decisive measures that directly tackle the causes of online harms, not partial measures that preserve the status quo.