Evolving Surveillance Tech Whets the Authoritarian Impulse to See and Know All

The Chinese government has invested significant effort in trying to shape international public opinion.

August 28, 2023
Surveillance cameras are mounted on a wall of the State Historical Museum near Moscow’s Red Square, amid Russian media reports that authorities aim to use such cameras to track potential recruits who try to evade the draft. (Vlad Karkov/Sipa USA via REUTERS)

At the 2023 Summit for Democracy organized by the US State Department, countering digital authoritarianism was on the agenda, in line with the Biden administration’s mission of advancing technology for democracy. From geopolitical tensions to the rapid growth of generative artificial intelligence (AI), the authoritarian digital arsenal keeps expanding and diversifying. As more countries move toward authoritarianism and technological advances abound, we need to keep track of these evolving tools and tactics to respond to new digital harms and counter global democratic decline.

There’s an Authoritarian App for That

As Canadian journalism struggles with Facebook’s and Google’s reaction to Bill C-18, Chinese journalists are contending with a new instrument of authoritarian control: an app called the “Journalist’s Home ‘University Hall.’” This training platform, released by the All-China Journalists Association (ACJA), offers more than 200 courses on the Marxist view of journalism, an ideology that defines the approach of the Chinese Communist Party (CCP) to media, and teaches the tenets of President Xi Jinping’s leadership concepts. The app also handles renewals of journalists’ press cards, pressuring the press corps to abide by the rules in order to keep their jobs.

The press release published by the ACJA and Xinhua News Agency couldn’t be clearer. It states the app “will play a positive role in educating and guiding journalists to concentrate their souls around Xi Jinping[’s] Thought on Socialism with Chinese Characteristics for a New Era.” The coursework includes videos on how to become influencers who can guide public opinion and safeguard ideological security, and how to take part in the communication “war” with the West on Western platforms. Journalists are not simply taught how to adhere to the CCP’s understanding of journalism but rather how to become ambassadors for the party’s agenda. The “Marxist View of Journalism,” published in 2011 by the Propaganda Bureau, and influenced by Karl Marx, Mao Zedong, Deng Xiaoping and Jiang Zemin, states that journalists have a responsibility to guide public opinion and be “obedient servants of the Party leadership.” According to the China Media Project, an independent research effort specializing in the study of Chinese media, dozens of journalists working for state-run media use personal accounts on Western social platforms to spread party-approved narratives about China.

Multidimensional Propaganda on Western Social Media

The guiding principles of the “Marxist View of Journalism” are not new. But the technology available now, combined with China’s global influence, has radically changed the game. In a speech delivered at the CCP’s Nineteenth National Congress in 2017, Xi explained that social media could be used to “present a true, multidimensional, and panoramic view of China, and enhance our country’s soft power.” In May 2021, he spoke about the need to “build a strategic communication system with distinctive Chinese characteristics, and focus on improving the influence of international communications, the appeal of Chinese culture, the affinity of China’s image, the persuasive power of Chinese discourse, and the guiding power of international public opinion.”

The Chinese government has invested significant effort in trying to shape international public opinion on China, building a network of social media accounts and news agencies as part of its global influence campaigns, and relying on the voices of Chinese diplomats, bloggers and social media influencers.

The tactics have changed over the years: at times they’re more aggressive and overt, at other times more positive and covert. Since 2019 in particular, Chinese diplomats have been using social media platforms banned in China, including Twitter (now X) and Facebook, to respond to Western criticism, often quite forcefully, earning them the title “wolf warriors.” In a recent article in Foreign Policy, “How China Trolls Flooded Twitter,” Bethany Allen-Ebrahimian, China reporter at Axios, explains how and why Beijing has adopted some of Russia’s more combative information warfare strategies, choosing Twitter and Facebook as the battleground. The change occurred in 2019 during the protests in Hong Kong. Before then, few diplomats or Chinese state media had a presence on Western social media. Then, in 2021 alone, Xinhua News’ Twitter following grew to 12 million. The strategy has not gone unnoticed: in 2019, Twitter revealed that it had discovered a “significant state-backed information operation focused on the situation in Hong Kong, specifically the protest movement and their calls for political change.” This type of organized foreign influence campaigning can be expected to get worse with the rapid rise of generative AI models.

The CCP’s tactics can also be more subtle, relying on Chinese social media influencers to spread a positive image of China by reaching audiences looking for more curated and beautiful content. In a 2020 Australian Strategic Policy Institute (ASPI) report entitled TikTok and WeChat: Curating and controlling global information flows, the authors describe how more than 300 Chinese state media reporters and social media influencers were dispatched to Xinjiang to portray the region as “a good place,” thereby glossing over the human rights violations committed there. These WeChat and TikTok campaigns were organized by the Xinjiang Uyghur Autonomous Region Party Committee Cyberspace Office and the Xinjiang Propaganda Department.

This strategy has now been deployed on Western social media platforms, including YouTube. In a report entitled Frontier influencers: The new face of China’s propaganda, published in October 2022, ASPI lays out how the Chinese government has added female China-based ethnic minority influencers, also described as “frontier influencers,” to its external propaganda arsenal. Faced with growing criticism about its treatment of Uighurs, Tibetans and other ethnic minorities, Beijing is allowing young female YouTube influencers from these communities to spread a bucolic and pastoral image of China’s frontier regions. This is in line with Xi’s strategy. During a visit to Xinjiang in July 2022, he underlined the need “to launch multi-level, omni-directional, three-dimensional propaganda about Xinjiang directed abroad…and tell China’s Xinjiang story well.”

This new frontier influencer phenomenon, according to ASPI, shows the Chinese government’s readiness to diversify its communications and experiment with different forms and platforms for propaganda. The impact of these types of influencers, with their polished and curated portrayal of China, could be considerably greater than the aggressive tactics of wolf warrior diplomats, since they’re more in line with what Western audiences are used to seeing on platforms such as Instagram.


TikTok and Snapchat: More Apps for Authoritarian Influence

While Facebook and X collect vast amounts of user data, Chinese social media apps come with their own norms around privacy and surveillance, and their own capacity to influence public opinion. In recent months, it’s become apparent that Western governments are worried about TikTok. The Australian Senate Committee on Foreign Interference through Social Media published a report in July 2023 recommending banning the app from federal government devices and describing it as the country’s biggest security risk due to “authoritarian-headquartered social media platforms like TikTok and WeChat and Western-headquartered social media platforms being weaponized by the actions of authoritarian governments.” Australia would not be the first government to ban TikTok from government phones — Canada, Belgium and France, among others, have already done so.

So, should TikTok be regarded as a security risk?

The debate around TikTok, which is owned and operated by China-based ByteDance, has grown considerably as a result of its booming popularity, particularly among young people. The main concerns are TikTok’s ownership, its access to user data and censorship. Beijing has the ability to exert control over social media platforms and tech companies through an array of regulatory regimes and via CCP cells within tech companies’ corporate structures, with those firms thereby “becoming more like state-owned enterprises.”

According to a 2019 ASPI report titled Mapping more of China’s technology giants, ByteDance “collaborates with public security bureaus across China, including in Xinjiang, where it plays an active role in disseminating the party-state’s propaganda on Xinjiang.” In the company’s early years, TikTok user data was sent to and processed in China. But in 2020, TikTok Chief Information Security Officer Roland Cloutier said that “our goal is to minimize data access across regions so that, for example, employees in the APAC region, including China, would have very minimal access to user data from the EU and US.” Yet recent controversies have cast doubt on this, as ByteDance admitted in December 2022 that it had accessed the data of two Western journalists and other users.

While the company says the social media app is “not influenced by any foreign government, including the Chinese government,” “does not remove content based on sensitivities related to China” and “does not moderate content due to political sensitivities,” several reports contradict these claims. The 2020 ASPI report TikTok and WeChat held that TikTok was engaging in censorship on a variety of political and social topics, including LGBTQ+ issues, Xinjiang, Tibetan independence and Tiananmen Square.

In July 2023, a lengthy investigation published by Forbes revealed that China’s largest state media outlets had been pushing propaganda to millions in Europe via TikTok since October 2022. The authors, Iain Martin and Emily Baker-White, reported their investigation found ads in TikTok’s advertising library that, besides promoting Xinjiang as a “good place,” touted China state-party propaganda on the benefits of COVID-19 lockdowns and extolled the country’s economy, tech sector and cultural heritage. While company policy states that “TikTok does not show political or election ads on the platform” and prohibits advertising about social issues by government entities, a TikTok spokesperson said that state media are not considered government agencies. This is not the first time Chinese state media publishers have used social media apps, including Twitter and Facebook, to push pro-Beijing narratives to Western audiences, particularly around protests in Hong Kong.

How integral is TikTok to China’s communications arsenal? In a Guardian article published in March 2023, Nita Farahany, a leading scholar on the ethical, legal and social implications of emerging technologies, argued that TikTok is part of “China’s cognitive warfare campaign” due to its mass collection of personal and biometric data. From her perspective, the Chinese government regards the human mind as another domain of military operations.

Several reports tend to support that argument. A recent investigation by cybersecurity firm Internet 2.0 warns against the company’s excessive data harvesting, including location checking, access to contacts and calendar information, and collection of other device information. The authors of Internet 2.0’s report write that the app would function perfectly well without this kind of access and criticize its “culture of persistent access.” While TikTok says US user data is mostly stored in the United States and Singapore, the company has acknowledged that some of it is stored in China. How much access the Chinese government has to such data is hard to determine. In 2021, TikTok also quietly updated its privacy policy to allow the app to collect biometric data, including “faceprints and voiceprints.”

The co-opting of social media for authoritarian purposes is not a problem limited to TikTok. Snapchat, an American messaging app, is under scrutiny as well, particularly in Saudi Arabia. With more than 20 million users, the app is extremely popular in the kingdom, leading one senior Snap Inc. executive to describe it to Al Arabiya English as “an extension of the [kingdom’s] social fabric.” Indeed, the company agreed to a collaboration with the Saudi culture minister. Prince Alwaleed bin Talal is one of the company’s biggest investors. The controversial MBS, as Crown Prince Mohammed bin Salman is known, is one of the platform’s biggest users, and the Saudi kingdom has been accused of using the platform to promote his image. While critics of MBS may be arrested for their social media posts, the government uses automated accounts to ensure that pro-government content dominates the platform.


Surveillance: Predictive Policing and Law on the Rise

Rapid advances in AI are another worrying trend. As AI programs acquire more data about individuals, they will be able not only to call out perceived transgressions but also to predict the likelihood that we will commit crimes.

AI-based facial recognition is already in use by police in the United States, Germany, the United Arab Emirates (UAE) and China, among other countries, to prevent and predict crimes. Such use is typically justified in the name of public safety, economic prosperity and national security. But according to legal scholar and social scientist Jon Penney and security technologist Bruce Schneier, writing in Slate last month, a number of countries are already experimenting with AI tech that can hyper-personalize surveillance and law enforcement, which would take current practices even further.

China is testing biometric ID surveillance and recognition in a growing number of areas across the country, including to target Uighur and Tibetan populations, as well as individuals whose behaviour appears suspicious, with the aim of safeguarding public stability and predicting crimes and protests before they happen. In a 2022 New York Times article titled “‘An Invisible Cage’: How China Is Policing the Future,” journalists Paul Mozur, Muyi Xiao and John Liu analyzed procurement documents to show how Chinese authorities are extending social, legal and political control through technology. President Xi has made no secret of this. In 2019, Xi said that “big data should be used as an engine to power the innovative development of public security work and a new growth point for nurturing combat capabilities.”

The company Super Red, for example, was contracted by 19 municipalities in Qinghai province, which borders Xinjiang, and had completed 1.2 to 1.5 million eye scans between March 2019 and July 2022. It should be noted that Super Red has connections to the Chinese Academy of Sciences, which is part of the central government and CCP. Similarly, as Mozur, Xiao and Liu reported, AI start-up Megvii’s “intelligent search” technology can assemble digital dossiers for the Chinese police with the aim of building “a multidimensional database that stores faces, photos, cars, cases and incident records.” Data can then be analyzed to “dig out ordinary people who seem innocent” or to “stifle illegal acts in the cradle.” They write that the Chinese company Hikvision aims to predict protests by collecting data on and building the profiles of Chinese “petitioners” — individuals who complain about local officials to higher authorities. Such individuals would thereby automatically be labelled as suspects before the fact. They also quote a researcher at China’s national police university, who stated in 2016 that “through the application of big data, we paint a picture of people and give them labels with different attributes. For those who receive one or more types of labels, we infer their identities and behavior, and then carry out targeted pre-emptive security measures.”

And China is not alone in this. At a police conference that took place in Dubai in March 2023, various surveillance tools were put on display, from facial recognition tools that track individuals across cities to new software that breaks into phones and “sentiment analysis software.” Since the Arab Spring in particular, several countries in the Middle East have heavily invested in surveillance technology because they are determined to monitor and repress “internal enemies.” Oyoon — meaning eyes in Arabic — is a citywide facial recognition program deployed across Dubai, capable of pulling “the identity of anyone passing one of at least 10,000 cameras, linking to a database of images from airport customs and residents’ identification cards.” Part of Oyoon’s aim is to prevent and predict crimes using AI, according to authorities.

While governments and law enforcement agencies, including in the United States, argue that tools such as data-driven police software can strengthen security and stability and “optimise the use of human resources, by reducing the need for human intervention,” this comes at a cost: the erosion of fundamental human rights, privacy, freedom from discrimination and legal accountability. Among the consequences of our ever-growing reliance on surveillance technologies is their chilling effect. In his paper “Understanding Chilling Effects,” legal scholar Penney describes surveillance as a “‘tool of social control’ that enhances the ‘power of social norms’ when people are being observed,” leading them not only to conform to rules but also, potentially, to self-censor. In Slate, Penney and Schneier argue that modern tech surveillance such as microdirectives will heighten this chilling effect, to the detriment of freedom of speech and freedom of action. What does a culture of technological surveillance mean for free will, if every step we take is watched and judged?

Finally, Penney argues that increasing surveillance by governments and corporations encourages individuals to surveil their fellow citizens. This is already a reality on social media, where our messages are watched and judged by others. A recent statement by China’s top state security body further substantiates the risk of peer-to-peer surveillance: in an article published at the end of July on Tencent’s WeChat platform, China’s Ministry of State Security urged the public to join the fight against spying, encouraging individuals to report activities that could be harmful to state security. The ministry’s new WeChat social media account is meant “to educate citizens about what to look out for, and provide them with an easy-to-access way to report suspicious activities.”

As our lives become more dependent on AI-driven technologies, we must build better guardrails to protect not only democratic values but also our right to make independent choices and to live free of others’ constant watchful eyes.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Marie Lamensch is the project coordinator at the Montreal Institute for Genocide and Human Rights Studies at Concordia University.