Women, Not Politicians, Are Targeted Most Often by Deepfake Videos

March 3, 2021
Women from labour organizations and other grassroots groups chant during a rally as part of International Women's Strike NYC. (AP Photo/Kathy Willens)

In a culture rife with misinformation and disinformation, it can be easy for people to be duped into believing they are reading or seeing something that has no basis in reality. Deepfake videos have added to this confusion, sometimes presenting content that is meant to deceive the viewer or to drastically misrepresent the person in the video. With the advent of deepfakes, viewers now need to question whether what they are seeing in a video is real or not.

Much of the public concern about deepfakes has centred on fears that they will be used to disrupt politics or business. But the reality is that deepfake technology is predominantly being used to create sexual videos of women without their consent.

Deepfake videos are a form of synthetic media that uses artificial intelligence to swap out the faces of people in videos. When done well, these videos can be quite convincing, making a puppet out of the person featured in them.

Fake videos have been made of politicians endorsing views contrary to their own, public figures confessing to wrongdoings, and women engaging in sexual acts they never took part in. Some of these videos are clearly deepfakes, due to their low-quality visual effects, unusual context or explicit acknowledgement that they are fakes. But many others are nearly impossible to distinguish from real videos and are not labelled as fakes.

The technology’s initial popularity was fuelled by the non-consensual creation of sexual deepfakes of female celebrities. In 2017, Motherboard journalist Samantha Cole reported that publicly available open source software made it possible for anyone with some programming skills and a decent graphics card to create these types of videos.

Since that time, concerns about the misuse of deepfakes to manipulate elections, perpetrate fraud in business, alter public opinion and threaten national security have dominated the discussion. A study by Chandell Gosse and Jacquelyn Burkell found that media reports focused primarily on the negative use of deepfakes for these purposes, rather than on the harms caused by non-consensual sexual deepfakes, despite that being the most common use of the technology.

Political figures have certainly been the target of deepfakes; however, the current risk of deepfakes directly influencing politics has largely been overstated. In 2020, Sensity AI, an organization that monitors the number of deepfakes online, found that of the thousands of celebrities, public figures and everyday people who had deepfakes made of them, only 35 of these individuals were American politicians.

A small number of deepfake videos featuring politicians have been intended to manipulate a political situation, but most videos of political figures have been used for parody or to educate the public about the role that deepfakes could play in spreading misinformation and disinformation. During the 2019 UK election, Boris Johnson appeared in a deepfake by social enterprise Future Advocacy, in which he endorsed his opponent; in 2020, Britain’s Channel 4 created an alternative Christmas message from the Queen in which she made uncharacteristic comments about her family and her position. Both videos revealed onscreen that they were deepfakes and that their purpose was to educate the public about the potential misuse of deepfake technology.

Artists’ use of deepfakes has also opened up a conversation about the ways in which people’s personal images can be manipulated, and has brought attention to important social issues. Stephanie Lepp’s art series Deep Reckonings imagines controversial figures having a reckoning about their politics, past behaviours and ideologies. Her videos, explicitly marked as fakes so that there is no confusion, present individuals saying things that sharply contradict their public personas. In a similar fashion, Bill Posters and Daniel Howe’s 2019 video project Spectre showed deepfake doppelgängers of Mark Zuckerberg and Kim Kardashian critiquing their own alleged misuse of social media data.

So far, the use of deepfakes by bad actors to deliberately confuse the public has been relatively rare. Instead, the true risk that deepfakes pose to politics and public information lies in what Robert Chesney and Danielle Keats Citron have termed the “liar’s dividend.” Deepfakes cast doubt on real videos, allowing politicians to claim that a genuine video or audio recording of them doing something problematic is actually a fake. Sam Gregory of Witness, an organization that educates people on the use of videos and synthetic media in relation to human rights, has noted that some political figures and their followers have already begun claiming that real videos are deepfakes in order to avoid acknowledging wrongdoing or to maintain their desired narrative.

These problems are significant and should not be ignored. However, the reality remains that the predominant use of deepfakes is to create sexual videos of women without their consent. A report by Sensity AI, The State of Deepfakes 2019: Landscape, Threats, and Impact, found that 96 percent of deepfakes were non-consensual sexual deepfakes and, of those, 99 percent were made of women. Deepfakes are a relatively new way to deploy gender-based violence, harnessing artificial intelligence to exploit, humiliate and harass through the age-old tactic of stripping women of their sexual autonomy.

Female celebrities in the United States and South Korea are the main targets of sexual deepfakes. These videos have become so prevalent in South Korea that citizens have petitioned the government to address the issue. However, the women featured in these videos have little recourse. Many major social media companies have banned non-consensual sexual deepfakes, but few countries have created laws that would help these women get the content removed across all websites. Laws addressing sexual deepfakes have been passed in a handful of places, such as Virginia, California and parts of Australia, but such regulation remains relatively rare globally. The American actress Scarlett Johansson, who has had many sexual deepfakes made of her, has expressed frustration that, even with significant resources to fight back, it can be impossible to get these videos taken off the internet.

While celebrities are the main focus of sexual deepfakes, everyday women and female public figures of all sorts are increasingly being targeted. In some cases, these videos have been expressly created as a tool of harassment. Rana Ayyub, a journalist in India who spoke out against the government’s response to the rape of an eight-year-old girl, was the subject of a deepfake video made as part of a coordinated online hate campaign. Noelle Martin, a young woman in Australia who advocates against image-based sexual abuse, also became the subject of fabricated sexual images and deepfake videos. More recently, UK poet and broadcaster Helen Mort found deepfakes of herself online. Besides harming women by co-opting their sexual identities, these videos are used as a form of intimidation to silence the women depicted and to discourage them from acting as public figures.

As governments and researchers devote resources to examining how to tackle the damaging uses of deepfakes, it is critical that they pay attention to the people who are most commonly harmed by them. In the conversation about responding to deepfakes, non-consensual sexual deepfakes should not be a side issue but at the very centre of the discussion.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Suzie Dunn is a senior fellow at CIGI, a Ph.D. candidate at the University of Ottawa and an Assistant Professor of Law & Technology at Dalhousie University.