What Can We Do to Combat Online Gender-Based Violence?

Laws and government regulations are one approach to dealing with digital threats against women and girls, but they must be implemented with care to avoid unintended harms.

June 23, 2022
Dea Rrozhani and Jonada Shukarasi, both 16, creators of GjejZa, an application that’s intended to fight domestic violence, work on a laptop in Tirana, Albania, September 25, 2019. (REUTERS/Florion Goga)

Digital technology provides modern citizens with remarkable tools for navigating an increasingly tech-facilitated world. But the same innovations that can be used to foster community and enable communication among family and friends can also be turned to anti-social purposes — to commit crimes, to sow division, to promote extremism, and to monitor and repress political opponents.

From disinformation to deepfakes and cyber mobs, digital technology has opened up new vulnerabilities for democratic life, social harmony and notions of the common good. In the words of cybersecurity experts Bruce Schneier and Tarah Wheeler: “Today, the Internet is fundamental to global society. It’s part of everything.… How individuals, corporations, and governments act in cyberspace is critical to our future. The Internet is critical infrastructure. It provides and controls access to healthcare, space, the military, water, energy, education, and nuclear weaponry. How it is regulated isn’t just something that will affect the future. It is the future.”

Because most technical and scientific knowledge is male-centric, women and girls experience these vulnerabilities differently than men and boys. Women in the digital world are often attacked for their appearance and their very existence, while men are attacked for their ideas and actions. Women often find that their security, their health, their relationships and their careers can be jeopardized in ways that men and boys rarely encounter or personally experience. Tech-facilitated gender-based violence (TFGBV) is one of these ways.

What can be done to prevent and combat TFGBV, without jeopardizing democratic values? In this first of two articles considering this question, I’ll be looking at different legal and regulatory approaches.

Law and Criminal Justice

If the various forms of TFGBV were treated as crimes, then law could be an important tool to combat them. Mary Anne Franks, a professor at the University of Miami School of Law, points out that “laws prohibiting stalking, harassment, extortion, computer fraud, identity theft, and threats can be very effective against online harassment, but they are rarely used because law enforcement either does not know, does not care, or does not have the training and resources to use them.” When women who face online harassment report their experiences to the police, they are too often told that the attack is a civil matter rather than criminal, even though applicable criminal laws exist.

Criminal prosecutions for online abuse can, however, be successful. In the Netherlands, Aydin Coban was convicted and sentenced to over 10 years for webcam blackmail and related crimes involving more than 30 girls in several European countries and the United States. In 2020, he was extradited to Canada to stand trial for extortion, criminal harassment, child luring and child pornography in the case of British Columbia teenager Amanda Todd, who died by suicide after years of cyber bullying and sextortion. His trial is under way in New Westminster, British Columbia.

Law professor and privacy expert Danielle Keats Citron argues that “we need to enhance criminal, tort, and civil rights laws’ ability to deter and punish harassers. State stalking and harassment laws should cover all of the abuse, no matter the mode of surveillance, tracking, terror, and shaming. The non-consensual disclosure of someone’s nude images should be criminalized. Civil rights laws should be amended to cover bias-motivated cyber stalking that interferes with victims’ important opportunities.”

Citron proposes permitting victims to sue under pseudonyms: “Pseudonymous litigation offers victims the opportunity to pursue their legal rights without further publicizing the abuse connected to their real identity.” Citron also proposes criminalizing revenge porn and harassment as felonies rather than as misdemeanours. Doing so would send a message to prosecutors that such forms of abuse should be taken more seriously than they usually are.

A recent example of a major legislative initiative is the European Union’s Digital Services Act (DSA). The law requires tech companies to offer a way for users to turn off recommendation algorithms that use their personal data to tailor content. Meta, TikTok and others are required under the DSA to share more data with university researchers and civil society groups about how their algorithms work. To increase transparency, companies will also be required to produce an annual risk-assessment report, reviewed by an outside auditor, with a summary of the findings made public.
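As a rough illustration of what the opt-out obligation means in practice, the sketch below (in Python, with hypothetical class and field names, not any platform’s actual implementation) shows a feed falling back from model-based ranking to simple reverse-chronological order when a user declines profiling.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    created_at: datetime
    relevance_score: float  # output of a personalized ranking model


def build_feed(posts: list[Post], personalization_enabled: bool) -> list[Post]:
    """Order a feed with or without profiling, per the user's choice."""
    if personalization_enabled:
        # Personalized path: rank by a score derived from the user's data.
        return sorted(posts, key=lambda p: p.relevance_score, reverse=True)
    # Opt-out path: no personal data used, newest posts first.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```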

An emerging trend is legislation in several US states designed to prevent alleged “viewpoint censorship” by social media companies, based on claims that content moderation is biased against conservative views. Courts have not looked favourably on such laws so far. Comparative research on content moderation by Oliver L. Haimson, Daniel Delmonaco, Peipei Nie and Andrea Wenger has found that three groups of users experience disproportionate levels of social media comment and account removals: politically conservative people, transgender people and Black people. However, the authors note that the types of content removed for each group differ: “Conservative participants’ removals often involved harmful content removed according to site guidelines to create safe spaces with accurate information, while transgender and Black participants’ removals often involved content related to expressing their marginalized identities that was removed despite following site policies or [that] fell into content moderation gray areas.”

Given these findings, such viewpoint laws could make it more difficult to remove hateful, misogynistic or racist content, including the manifesto posted by the suspect in last month’s mass shooting in Buffalo, New York. Limits to free speech are certainly controversial and potentially dangerous to democratic discourse. In the case of TFGBV and online harms such as doxing and cyber mobs, however, scholars draw the line at speech that silences other speech. In the words of Mary Anne Franks: “When online harassment silences women and minorities and pushes them out of public spaces and conversations, then the principles of free speech have not been upheld.”

Some proposed viewpoint laws do include exceptions for threats against people based on factors such as race, religion or national origin. Gender is rarely, if ever, included. This omission is quite common in laws banning hate, harassment, stalking or other forms of violence. To address it, Danielle Keats Citron proposes criminalizing threats made on the basis of gender.

Research in Brazil by political scientists Victor Araújo and Malu A. C. Gatto has shown that conservatism in the electorate is associated with the adoption of fewer policies that address violence against women, producing an implementation gap between law and practice. The result, according to Araújo and Gatto, is that “in contexts where the electorate holds conservative preferences, policy responsiveness may incur costs to women’s lives.” Clearly, greater diversity in the legal profession, police and the courts, as well as in tech companies themselves, would go a long way to ensuring better laws and fairer enforcement. But diverse electorates also appear to be a prerequisite for laws and policies that adequately address TFGBV.

Regulation: A Delicate Balance

It has become increasingly clear that tech platforms cannot be solely relied upon to moderate content and to police their users. The result has been a growing appetite on the part of governments around the world to regulate these platforms with laws and penalties. A tension has emerged between preventing or controlling digital harms such as cyber bullying, misogyny and hate, on the one hand, and, on the other, using such controls to stifle dissent, undermine free speech or erode democratic values. Danielle Keats Citron coined the term “censorship creep” to refer to this “expansion of speech policies beyond their original goals.” The costs, she argues, include “the suppression of legitimate debate and counter speech that might convince people to reject bigotry and terrorist ideology.”

As the non-governmental organization Freedom House reports,

The current drive for greater regulation raises the risk that instead of curbing and decentralizing the power of tech companies, governments will attempt to wield it for their own purposes and further infringe on users’ rights. The most promising legislation seeks to address online ills while bringing both corporate and state practices into compliance with international human rights principles such as necessity, transparency, oversight, and due process. But the danger posed by the worst initiatives is immense: if placed in the hands of the state, the ability to censor, surveil, and manipulate people en masse can facilitate large-scale political corruption, subversion of the democratic process, and repression of political opponents and marginalized populations.

This has already become a concern in the United States, where the Supreme Court may soon eliminate abortion rights and individual states may then criminalize those who seek or provide abortions. The fear is that tech companies would be required to hand over user data to police or prosecutors seeking evidence of women and girls searching for abortion providers, medication or even travel to states where abortion is still legal.

This concern is not overblown. Censorship creep has already occurred in the area of countering terrorism and violent extremism and is extending into areas of foreign-influence campaigns, political disinformation and hate speech. Take the example of “hashing” technology developed to counter online child sexual abuse material (CSAM). As explained by the cybersecurity firm SentinelOne: “Hashes are the output of a hashing algorithm…. These algorithms essentially aim to produce a unique, fixed-length string — the hash value, or ‘message digest’ — for any given piece of data or ‘message’.” In countering CSAM, abusive images and videos are “hashed” and added to databases that are shared by law enforcement, non-profit organizations such as the UK-based Internet Watch Foundation and the major internet companies. The same technology has now been applied to terrorist and extremist content. Under pressure from the European Union to develop such “voluntary” controls following major terrorist attacks in Paris and Brussels, the industry-led Global Internet Forum to Counter Terrorism (GIFCT) was formed in 2017 to run the database of hashed violent images.
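To make the mechanism concrete, here is a minimal sketch of hash-based matching in Python. For simplicity it uses an exact cryptographic hash (SHA-256); production systems such as Microsoft’s PhotoDNA or Meta’s PDQ instead use perceptual hashes that still match after resizing or re-encoding, and the database entry below is a dummy placeholder, not a real hash list.

```python
import hashlib
from pathlib import Path

# Placeholder stand-in for a shared database of hash values of known
# harmful media (real lists are maintained by bodies such as the GIFCT
# or the Internet Watch Foundation and are not public).
KNOWN_HARMFUL_HASHES: set[str] = {
    "0" * 64,  # dummy entry; a real SHA-256 digest is 64 hex characters
}


def hash_file(path: Path) -> str:
    """Return the SHA-256 message digest of a file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large video files need not fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_harmful(path: Path) -> bool:
    """Check an uploaded file's digest against the shared database."""
    return hash_file(path) in KNOWN_HARMFUL_HASHES
```

The matching step itself is trivial; the governance questions raised here live in the database, namely, who adds hashes, under what definition of “terrorist and extremist content,” and with what oversight.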

Seth G. Jones and his colleagues at the Center for Strategic and International Studies (CSIS) observe that “the evolution of commercial technology and the diffusion of internet and social media platforms will require Western governments to continually train government personnel, hire experts from technology companies and academia, update laws, and work with the private sector. Companies that cannot effectively take down content that supports terrorism should be held accountable, including through legal means.”

One problem with this blanket assertion is that much TFGBV is perpetrated by extremist groups or individuals not subject to anti-terrorist legislation or counterterrorism initiatives. Such initiatives tend to focus on groups designated by the United Nations or individual nations as terrorist organizations. While male supremacist websites and posts are clearly linked to misogynistic ideology that promotes subjugation of women, including by violent means, the use of “borderline content” has enabled violent extremists to thrive on the internet. Ye Bin Won and Jonathan Lewis, extremism researchers at the Global Network on Extremism and Technology (GNET), warn that “more diligence is required to mitigate the use of veiled or coded language by bad actors in furtherance of their respective extremist ideologies.… While borderline content does not explicitly violate policies, the unmitigated perpetuation of such content allows these communities to flourish. Technology companies will have to get the balance right, as ideologies such as male supremacism continue to inspire violent attacks, espouse dangerous rhetoric, and sometimes serve as links to other extremist communities.”

Definitional ambiguity creates problems for content moderation, information sharing across jurisdictions where definitions may vary, and policy coordination across tech platforms. The key is to enhance the clarity, accountability, transparency and oversight of government efforts to regulate content and the solutions developed by the tech industry in response to those efforts.

One interesting approach to resolving the tension between removing harmful content and protecting free speech is the idea of “counterspeech,” a term coined by Susan Benesch, founder of the Dangerous Speech Project. Counterspeech is similar to creating counter-narratives in the area of counterterrorism and in countering violent extremism. Facebook’s Online Civil Courage Initiative, for example, rewards civil society groups with support and other perks for countering hate speech online. Google’s Jigsaw, a unit that “explores threats to open societies,” combines Google’s advertising algorithms with YouTube’s video platform to dissuade aspiring Islamic State recruits. As Danielle Keats Citron describes it: “The program places advertising alongside results for key words and phrases commonly searched for by people attracted to ISIS. The ads link to YouTube channels featuring videos that counter ISIS’s brainwashing, such as testimonials from former extremists and imams denouncing ISIS’s distortion of Islam.”
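In engineering terms, the program Citron describes is keyword-triggered ad placement. The toy sketch below shows the core lookup; the trigger phrases and URLs are invented placeholders, since Jigsaw’s actual keyword lists are not public.

```python
# Invented placeholder mapping from risky search phrases to counterspeech
# links; real campaigns curate these lists with subject-matter experts.
COUNTERSPEECH_LINKS: dict[str, str] = {
    "join the caliphate": "https://example.org/former-extremist-testimonials",
    "isis recruitment": "https://example.org/imams-on-islam",
}


def counterspeech_for(query: str) -> list[str]:
    """Return counterspeech links whose trigger phrases appear in a query."""
    q = query.lower()
    return [url for phrase, url in COUNTERSPEECH_LINKS.items() if phrase in q]


print(counterspeech_for("How does ISIS recruitment work?"))
# -> ['https://example.org/imams-on-islam']
```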

Similar initiatives could be used to combat TFGBV.

Nicholas J. Rasmussen, executive director of the GIFCT, highlights the importance of “bringing stakeholders from private technological companies, academic researchers, and government practitioners together.” On the other hand, evelyn douek, a senior research fellow at Columbia University, warns that “building in third-party oversight and accountability mechanisms from the start is essential. The GIFCT example shows that when institutions are set up as reactions to particular crises, the institutional design may not serve longer-term or broader interests.”

This leads us to the issue of tech design, which I’ll explore in the second part of this two-part series. Given the challenges that emerge during implementation of legal, regulatory or technical approaches to combatting TFGBV, I’ll argue that education, in both the short and the long term, and across all sectors of society, is the most promising approach to addressing the persistence of gender-based violence both online and off.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Ronald Crelinsten has been studying the problem of combatting terrorism in liberal democracies for almost 50 years. His main research focus is on terrorism, violent extremism and radicalization and how to counter them effectively without endangering democratic principles.