Education Is Key in the Fight against Cyber Harassment

It’s vital for all stakeholders and sectors of society to be educated about the nature of online gender-based violence, and how to combat it.

August 18, 2022
At the urging of the feminist collective #NousToutes, a demonstration took place in Paris to protest against sexist and sexual violence against women, children and trans people, November 20, 2021. (Djoudi Hamani/Hans Lucas via REUTERS)

In this second part of my look at what can be done to combat tech-facilitated gender-based violence (TFGBV), I shift the focus from legal and regulatory approaches to technical and educational ones. (Part one is here.)

Tech Design and Ethical AI

Some of the challenges faced by legal and regulatory approaches to combating TFGBV result from how social media platforms are designed and function. In the conclusion to their book Mediating Misogyny, editors Jacqueline Ryan Vickery and Tracy Everbach pose questions to “experts trained in and knowledgeable about digital media, journalism, law, gender, and harassment” about what can be done to address mediated misogyny. For Tarleton Gillespie, a Microsoft researcher and associate professor of communication at Cornell University, digital platforms have created a business model that facilitates misogyny, harassment and hate. In answer to the question “What Can Digital Platforms Do to Help Combat Online Harassment?” Gillespie replies:

platforms were intended to allow everyone to speak their minds, to connect with others around issues that matter to them, to be findable on the network, to present themselves as they choose, and to form bonds through conversation untrammeled by status or location. Harassment is all of those things, at least for the harasser. Harassment is not a perversion of that dream; it is one logical version of that participation — just not the one designers had in mind. And from a business perspective, at least in the short term, harassment and trolling are just as valuable to the platform as other forms of participation. If it’s advertising they seek, these are eyeballs to be sold like any other. If it’s data, these are traces to be sold like any other.

Gillespie goes on:

Platforms … hold popularity to be a fundamental value, the core value that serves as proxy to every other value: relevance, merit, newsworthiness. It is their core metric for engagement, and they perform it back to users as recommendations, cued-up videos, trends, and feeds. Harassment and hate take advantage of this by doing things that accumulate popularity: cruel insults that classmates will pass around, insults aimed at women that fellow misogynists will applaud, nonconsensual porn that appeals to the prurient interests. These are not just attacks, they’re generators of likes, views, comments, retweets, making it very hard for platforms to discern or pass up.

Tech reporter Paris Martineau points out that “in a connected, searchable world, it’s hard to share information about extremists and their tactics without also sharing their toxic views,” warning that “too often, actions intended to stem the spread of false and dangerous ideologies only make things worse.” She suggests that media literacy programs or technical fixes such as adding content moderators, developing more advanced auto-filtering systems or deploying fact-checking programs “ignore messier, less quantifiable parts of the problem, like the polarized digital economy where success is predicated on attracting the most eyeballs, how rejecting ‘mainstream’ truths has become a form of social identity, or the challenges of determining the impact of disinformation.”

One possible way to fix this broader problem is to change the model, to maximize positivity, constructive engagement and long-term collaboration — not negativity, destructive engagement and short-term attention. However, in response to the same question posed to Tarleton Gillespie in the conclusion to Mediating Misogyny, Adrienne Massanari, former director of the Center for Digital Ethics and Policy at Loyola University, suggests that “platform managers should be wary of easy algorithmic fixes for harassment. Like all technologies, algorithms have politics. Often bots and scripts are created in such a way that they unintentionally suppress certain speech and images shared by marginalized communities. Or they may be gamed by harassers to target individuals in an effort to intimidate them off the platform.” She warns that “even seemingly benign information about user activity — about when messages are read, profiles are visited, or posts are made, for example — can become tools for harassment. Therefore, it is critical that platform designers consider the ways that features they create impact diverse audiences.”

Some platforms are listening. Microsoft, for example, recently announced guidelines for building artificial intelligence (AI) systems responsibly, and Meta is helping Wikipedia fight misinformation with a new AI tool. Governments are also looking at ways to regulate AI. According to AI governance expert Mardi Witzel, Canada’s newly proposed Artificial Intelligence and Data Act, for example, “would require firms designing, developing and using high-impact AI systems to meet certain requirements aimed at identifying, assessing and mitigating bias and harm.” Witzel suggests that such regulatory efforts will ultimately lead to “the creation of an entire new services market, the market for AI assurance.”

Adrienne Massanari’s suggestion that designers consider how their algorithms impact diverse audiences is akin to calling for ethics to become an integral part of design, namely, algorithmic design that is socially aware and considers social values, such as fairness, privacy, transparency and even morality. This is not the norm. Facial recognition, for example, is significantly less accurate in identifying faces of women and people of colour. When police and immigration services use these algorithms, serious social inequities can result.

Tech ethicists Carey Fiesler and Natalie Garrett propose the following solution for developers of new technology: “As part of the design process, you should be imagining all of the misuses of your technology. And then you should design to make those misuses more difficult.” The hope is that tech companies will make their algorithms less destructive, their content moderation more effective, and their personnel more diverse and better trained. This will not always be easy. For example, efforts to detect and eliminate deepfakes by means of AI and deep learning, a subfield of machine learning, are fraught with difficulties, as described by Elise Thomas: “Cultural misunderstandings, recognising satire and protected political speech, and the complexity of legal jurisdictions and incompatible national laws are all problems…. As complex as the technology of deepfake detection is, it may turn out to be the easy part compared to the politics of policing what’s ‘real’ amidst the morass of sex, lies and videotape.”

In their book The Ethical Algorithm, computer scientists Michael Kearns and Aaron Roth propose the creation of “algorithms that can assist regulators, watchdog groups, and other human organizations to monitor and measure the undesirable and unintended effects of machine learning and related technologies.” By instilling human values such as fairness, privacy, transparency and accountability into these “ethical algorithms,” the worst effects of unintended design consequences can be mitigated, such as the ability to de-anonymize aggregated data despite efforts to remove variables that may help to identify a subject. While the trade-off might be a need for much larger data samples or a lack of precision in searches, in the words of data scientist Cathy O’Neil, “we need to impose human values on these systems, even at the cost of efficiency.”
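To make that trade-off concrete, here is a minimal sketch in Python of one privacy-preserving technique in this family, a differentially private count. The data and function name are hypothetical illustrations, not drawn from Kearns and Roth’s book: random noise is added to an aggregate statistic so that no single individual’s record can be inferred from the result, at the cost of some precision.

```python
import numpy as np

def noisy_count(records, predicate, epsilon=0.5):
    """Differentially private count of records matching `predicate`.

    Laplace noise with scale 1/epsilon masks any single person's
    contribution; a smaller epsilon means stronger privacy but a
    noisier answer.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical example: count survey respondents who reported online
# harassment without revealing whether any one respondent did.
survey = [{"harassed": True}, {"harassed": False}, {"harassed": True}]
print(noisy_count(survey, lambda r: r["harassed"], epsilon=0.5))
```

The noisier the answer (the smaller epsilon is), the stronger the privacy guarantee, which is precisely the cost in efficiency and precision that O’Neil describes.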

User Initiatives

In an analysis of the link between algorithmic development and the rise in toxic political discourse in Canada, Stephen Maher highlights a 2018 change in Facebook’s algorithm to prioritize content that engages users, namely, “meaningful interaction,” as measured by likes, shares and comments. The result was an increase in emotionally charged political messages. People are attracted to negativity much as motorists slow down to stare at a car crash, thereby creating a traffic jam. Gender-based violence, hate and harassment, when prioritized by Facebook’s algorithm, create a positive feedback loop that amplifies and spreads the initial attack.
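The mechanism Maher describes can be illustrated with a deliberately simplified sketch. The weights and posts below are hypothetical and bear no relation to Facebook’s actual ranking system: when a feed is ordered purely by weighted engagement counts, the most inflammatory post claims the top slot, attracts still more reactions, and climbs further.

```python
# Toy feed ranking: posts scored purely by weighted engagement, a
# stand-in for the "meaningful interaction" proxy described above.
# All weights and posts are hypothetical; real systems are far more complex.
ENGAGEMENT_WEIGHTS = {"likes": 1, "shares": 5, "comments": 3}

posts = [
    {"text": "Local library extends weekend hours",
     "likes": 40, "shares": 2, "comments": 5},
    {"text": "Outraged rant targeting a public figure",
     "likes": 90, "shares": 60, "comments": 120},
]

def engagement_score(post):
    return sum(weight * post[signal]
               for signal, weight in ENGAGEMENT_WEIGHTS.items())

# The angriest post wins the top slot, is shown to more users, collects
# more reactions, and climbs further: the feedback loop described above.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])
```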

One answer to this amplification of TFGBV is for users themselves to create pockets of positivity on the internet — safe havens, so to speak. Hashtag campaigns such as #MeToo and #NastyWoman were developed to counter the corrosive effects of misogynistic political speech. The creation of web communities that reinforce positivity and diversity can go a long way to creating alternative spaces to the dark corners of the web. Feminist activism has played a leading role in combatting online misogyny and hate in such ways. Another example is the use of “online whisper networks,” namely, private online fora where women share information about their dating experiences. In the fight against disinformation, the online encyclopedia Wikipedia has taken the lead with its open-access format that creates “radical transparency.” As tech reporter Omer Benjakob points out, “Wikipedia’s tedious and self-enforced editorial process has proved to be a better tool to deal with disinformation than the algorithms, moderators and fact-checkers Silicon Valley giants are relying upon.”

Another example of a user initiative is TrollBusters, created by journalism professor Michelle Ferrier to provide just-in-time support to women journalists, writers and bloggers targeted by online harassment. TrollBusters was inspired by Facebook research that showed how mood can be affected by the kind of posts that appear on a user’s feed: negative posts foster more negativity; positive ones foster more positivity. According to Ferrier and her colleague Nisha Garud-Patkar, “when Troll-Busters.com is alerted to online harassment through its website or on Twitter, the service sends positive messages and just-in-time education to help the target protect her location and help her document, deflect, and respond to online harassment.”

The key here is a combination of online and offline activity. TrollBusters provides its users with referrals to lawyers, psychologists and technical support to help targets of abuse and coach them on how to proceed. Offline programs and services that deal with domestic violence and abuse or with teen suicide can be paired with online efforts to counter trolls and negative posts with positive affirmations and online support.

An Educational Imperative

The best, albeit long-term — even generational — solution for preventing TFGBV is education. This includes digital and media literacy, critical thinking, and understanding how the technology works, both as a useful tool and as a source of potential harm.

One interesting initiative is the Dangerous Speech Project (DSP), founded by Susan Benesch of Harvard University’s Berkman Klein Center for Internet and Society. Part of the DSP mandate is to research how to combat dangerous speech while still protecting freedom of expression. The group advises tech companies on their content policies and educates groups such as activists, educators, lawyers, researchers, students and tech company staff about studying and countering dangerous speech.

One of the DSP’s most comprehensive educational tools is a practical guide on dangerous speech. Among other things, the guide identifies five hallmarks of dangerous speech, one of which is speech by an individual that asserts that a marginalized out-group is attacking the women and children of that individual’s own group. With this tactic, known as “accusation in a mirror,” the speaker attributes to perceived enemies the very acts of violence the speaker hopes to commit against them. A recent study by Blyth Crawford, a researcher at the International Centre for the Study of Radicalisation in the United Kingdom, describes similar narratives used by the neo-fascist militant accelerationist movement (NMA). According to Crawford, accelerationists of all stripes are “united by the shared aim of exacerbating existing political tensions to the point of societal collapse in order to rebuild a new, ‘pro-white’ society.” Crawford found a connection between the movement’s anti-gender sentiment and its antisemitism: “Any sexuality or aspect of sexual politics that falls outside…strict constructions [of what constitutes a ‘real’ family] is regarded as a threat to the white race and is attributed to hostile Jewish influence.” This fictitious threat is spread and amplified online as the great replacement conspiracy theory and used to justify violence against Jews, feminists and members of the LGBTQ+ community.

It is vital for all stakeholders and sectors of society to be educated about the nature of TFGBV and how to combat it and mitigate its pernicious effects without jeopardizing democratic values. The Citizen Lab, in a submission to the UN Special Rapporteur on violence against women, its causes and consequences, argues that “well-intentioned policy measures meant to protect vulnerable groups can have serious negative consequences when not properly implemented.” The Lab also observes that “where new powers are insufficiently targeted or fail to account for the unique characteristics of the online ecosystem, they may also threaten human rights, including — but not limited to — freedom of opinion, expression, and privacy.” The eQuality Project at the University of Ottawa is one example of an educational program that addresses these concerns, focusing on young people’s use of networked spaces, with special emphasis on privacy and equality issues. Another example is the Screening Surveillance series, a public education project that highlights the potential human consequences of big data surveillance. Created by sava saheli singh, research fellow at the University of Ottawa Centre for Law, Technology and Society, the series uses “speculative surveillance,” whereby short films create dystopian futures depicting the harmful consequences of using tracking technology in areas such as education, employment and mental health.

Any educational imperative to combat TFGBV, especially in the longer term, also includes broader attempts to educate young people about gender and reproductive health, about physical and emotional development, and about the importance of personal dignity and respect for diversity in appearance, ability and interests. Sadly, some politicians have characterized such educational initiatives as propaganda or indoctrination and have proposed or enacted legislation banning certain kinds of knowledge, such as sex education or critical race theory. In the words of legal scholars Mari J. Matsuda, Charles R. Lawrence III, Richard Delgado and Kimberlé Williams Crenshaw: “The code words of this backlash are words like merit, rigor, standards, qualifications, and excellence. Increasingly we hear those who are resisting change appropriating the language of freedom struggles. Words like intolerant, silencing, McCarthyism, censors, and orthodoxy are used to portray women and people of color as oppressors and to pretend that the powerful have become powerless” — another example of accusation in a mirror.

Regressive reactions to honest and informed efforts to increase knowledge about half of the human family suggest a creeping reversion to the Patrilineal/Fraternal Syndrome that lies at the root of gender-based violence, both offline and online. The United States Supreme Court’s recent majority opinion overturning Roe v. Wade and Planned Parenthood v. Casey can be seen in the same light. In a solo concurring opinion, Justice Clarence Thomas went even further, calling for the Court to reconsider other rights, such as contraception, same-sex marriage and even same-sex relationships. The Supreme Court decision flies in the face of research showing that unwanted pregnancies have serious mental health and socio-economic consequences, while women with legal access to safe abortion have benefited socially and economically, including through improved educational attainment and job opportunities.

Warnings have proliferated about the digital trail that women seeking abortions produce and how this data can be used to track them. In the words of Lil Kalish, editorial fellow at Mother Jones: “As the line between our digital and physical selves fades, surveillance researchers and reproductive rights advocates increasingly see our data as the next big front in the war on abortion. Law enforcement has new tricks to land convictions for miscarriages or post-ban abortions; anti-abortion activists are making sophisticated updates to tried-and-true methods of stalking, harassment, and disinformation.”

According to privacy and cybersecurity reporter Tonya Riley, the digital rights group Electronic Frontier Foundation “is advising individuals to review privacy settings, turn off location services on apps that don’t need them and switch to encrypted messaging apps like Signal. Experts also advise individuals who are attending protests related to the ruling to leave their phones at home or take a burner phone to minimize the risk of location data being used against them by law enforcement.” Legislative efforts to criminalize providing online information or assistance to pregnant women and girls, and to hold tech platforms accountable for such abortion-related content, risk stifling valuable information about women’s reproductive health and, at the very least, will do nothing to reduce the threat of online harassment, stalking and disinformation by anti-abortion activists. At worst, they could even encourage vigilante activity, including violence.

Conclusion

Danielle Keats Citron, an advocate of legal reform to address TFGBV, admits that changing laws takes time. In the meantime, there is much that can be done: “Through software design and user policies, Internet companies are engaging in efforts to inculcate norms of respect. Parents and educators are teaching the young about cyber harassment’s harms.… If we act now, we could change social attitudes that trivialize cyber harassment and prevent them from becoming entrenched. Then future generations might view cyber harassment as a disgraceful remnant of the Internet’s early history.”

As author Seyward Darby said in the wake of the May 2022 Buffalo mass shooting that was live-streamed on Twitch for almost two minutes, in a direct replication of the modus operandi of the March 2019 mosque shootings in Christchurch, New Zealand: “It’s an all-hands-on-deck situation.… There’s a default to, ‘Well, this is a law enforcement problem. How are we going to hold him [the alleged gunman] accountable?’ But we also should think about tech companies and government institutions and educational settings.… We need to ask, ‘How do we lay the groundwork for a healthier society?’ And that work must come from every possible actor.”

Understanding how extremist, misogynist groups justify their violence and how responding to this violence can have negative consequences on a variety of human rights, particularly women’s rights, is a crucial part of any educational initiative. And these initiatives must go well beyond the curricula and classrooms of students from kindergarten to university. Civil society, commercial advertising, media coverage, popular culture, political discourse, the legal profession, police, prosecutors, judges, online extremism researchers, tech designers, online content moderators — all must address this educational imperative.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Ronald Crelinsten has been studying the problem of combatting terrorism in liberal democracies for almost 50 years. His main research focus is on terrorism, violent extremism and radicalization and how to counter them effectively without endangering democratic principles.