In October, the Trump administration placed eight Chinese technology firms on a US blacklist. The decision bans US companies from exporting technology to Dahua Technology, Hikvision, IFLYTEK, Megvii Technology, SenseTime, Xiamen Meiya Pico Information Co. Ltd., YITU Technologies and Yixin Science and Technology Co. Ltd.
The firms aren’t well known, but to experts in the opaque world of surveillance technology, they matter. Together, they account for as much as one-third of the global video surveillance market.
Steven Feldstein, a non-resident fellow at the Carnegie Endowment for International Peace (CEIP) and an associate professor at Boise State University, noted that Hikvision and Dahua are responsible for a significant share of the video cameras used for surveillance around the world.
“Given the interrelated nature of the global supply chain, components from US manufacturers are regularly incorporated in these products,” he said. “Adding these companies to the entities list sends a blunt message: We’re going to restrict your ability to use US components for Chinese surveillance systems.”
The move to add these Chinese companies to the US blacklist comes as the use of artificial intelligence (AI) for surveillance expands rapidly around the globe. Experts and policy makers agree that surveillance will continue to grow in the coming years, even if government regulators were to mount a massive, coordinated effort to discourage its use.
At CEIP, Feldstein recently developed an AI Global Surveillance Index to track the use of AI for surveillance. The index found that 75 of 176 countries with populations of more than 250,000 use some form of AI technology for surveillance.
“We are trying to understand in an in-depth way how this technology is being used, particularly by governments, and deployed around the world,” Feldstein said. “There is a lot of hyperbole in the space but not a lot of specific information about what is going on out there.”
Feldstein’s study found that in many countries AI-driven surveillance technology served a dual purpose. On one level, the technology, including facial recognition cameras, helps countries set up so-called “smart policing” and “safe city” systems.
“These systems help cities time traffic lights to limit traffic, dispatch police when a crime is committed or an emergency crew when a fire is identified,” he said. “The facial recognition component can be used in airports, for example, so that terrorists are identified.”
However, on another level, authoritarian countries often employ the service in ways that violate human rights, for example, to identify and monitor dissidents or protesters.
“What has evolved from those services is that a lot of countries use the surveillance technology to keep an eye on protesters and dissidents,” he said. “They surveil a demonstration and pick up the protesters later. It all comes down to who do you trust? Do you trust security services to protect you or are they operating at the behest of an autocratic system?”
CEIP’s study notes that Chinese companies, including Huawei, Hikvision, Dahua and ZTE, are the largest providers of advanced AI-driven facial recognition surveillance equipment to countries around the world. According to the study, Chinese companies supply AI surveillance technology to 63 countries. Huawei is the largest provider of the technology, accounting for 50 countries worldwide.
“Chinese companies have provided the technology to some countries, such as France, where the law suggests they won’t be used for unlawful purposes,” he said. “But China is also providing the technology to countries that are known human rights violators with low standards.”
Feldstein noted that, for example, China provided a long-term, low-interest-rate loan to the Zimbabwean government so it could buy surveillance technology and set up a facial recognition program. Some experts believe the project is China’s attempt to build a more diverse database of faces in order to reduce racial bias in Chinese facial recognition technology and, ultimately, improve its own surveillance systems.
“Zimbabwe has a long pattern of concern involving police conduct. China is providing very advanced equipment to countries like Zimbabwe that are known to abuse them,” he said. “They [China] provide low-interest loans over long periods of time that have conditions attached to them. If Kenya wants to buy a surveillance system and have eyes on protesters, China will loan you the money, but it is conditioned on buying Chinese products and potentially giving Chinese companies access to the data.”
Domestically, China is using its surveillance technology to monitor, detain and repress its ethnic Uighur Muslim population in its Xinjiang region, in the far west of China. Many Uighurs are interned in re-education camps, including one that reportedly occupies more than 195,000 square metres. The employment of surveillance AI against the Uighur population offers a disturbing indication of how the technology may be used by autocratic regimes in other parts of the world.
“The use of this technology against the Uighur population confirms the worries people have about oppressive use of surveillance technology,” Feldstein said.
Liberal democracies are struggling with the ethical dilemmas posed by the massive growth of surveillance technology. Critics of the growth of unregulated surveillance technology argue that the US blacklist of Chinese surveillance and AI companies is a good first step, but it won’t stop the global expansion of the technology. In addition to the United States and China, Israel, Italy and Japan are actively manufacturing and supplying surveillance technology, Feldstein said.
“I don’t think you can stop the spread of this kind of technology,” he added.
However, liberal democracies such as the United States and Canada can take other steps to help limit the impact of the technology. Currently, regulation around surveillance systems and privacy controls is in its infancy.
“It’s the wild, wild west right now, when you consider how your data is transported from one database to another,” Feldstein said.
In January, The Intercept reported that Amazon provided its researchers in Ukraine access to a cloud storage service that contained every video created by its consumer security camera service called Ring, which is used by millions of households. In August, The Washington Post reported that Ring had set up a video-sharing partnership with more than 400 police forces across the United States.
Feldstein, who was deputy assistant secretary of state in the US Bureau of Democracy, Human Rights and Labor from 2014 to 2017, argues that the United States should lead the way in building rules for the control of personal data. Two overarching themes have emerged: regulators should require more transparency about the distribution and sale of personal video data, and citizens’ consent should be required before their data is moved or sold to brokers, other companies or individuals.
“If your data is moved or sold to brokers or other companies you should consent for that data to go there,” he said. “If you do consent, you should receive part of the profit related to the sale of data about yourself. You should be asked, ‘Do you approve that the information about you is being sold or distributed to other firms?’ Transparency and consent should be central to the distribution of surveillance data.”
Finally, policy makers should set up tough enforcement mechanisms, including fines and compensation for victims, to mitigate abuse. “If your data was distributed without user consent there should be a fine,” he said. Cities must evaluate whether privacy rights trump police enforcement considerations. In May, San Francisco set up a moratorium on the use of facial recognition software by the police and other government offices.
The global expansion of surveillance technology has been driven in part by thousands of informal workers who tag massive databases of images and videos so that AI systems can learn to identify real-world objects.
“There is a whole gig economy of workers who are tagging objects, to train an algorithm to understand that a tree is a tree,” he said. “These workers are tagging millions of different pictures of trees so it can learn over time what is a tree and differentiate it from a lamppost. All this tagging helps create image data sets to train how video recognition will work.”
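The idea Feldstein describes can be sketched in a few lines of code. This is a deliberately minimal, hypothetical illustration: the feature vectors stand in for what a real vision model would extract from each image, and the nearest-centroid classifier is far simpler than the deep networks actual surveillance systems use. The point is only to show how human-supplied tags become the training signal.

```python
# Minimal sketch: human-tagged examples train a classifier to tell
# a "tree" from a "lamppost". Feature vectors here are hypothetical
# stand-ins for features a real vision model would extract.

from collections import defaultdict

# Each tagging task pairs an image's features with a human-supplied label.
tagged_data = [
    ([0.9, 0.1], "tree"),      # tall, leafy
    ([0.8, 0.2], "tree"),
    ([0.1, 0.9], "lamppost"),  # thin, metallic
    ([0.2, 0.8], "lamppost"),
]

def train_centroids(data):
    """Average the tagged feature vectors per label (nearest-centroid model)."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for features, label in data:
        for i, value in enumerate(features):
            sums[label][i] += value
        counts[label] += 1
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest to the new image's features."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: squared_distance(centroids[label], features))

centroids = train_centroids(tagged_data)
print(classify(centroids, [0.85, 0.15]))  # a new, tree-like image → "tree"
```

The more tagged examples the workers supply, the better the averaged centroids represent each category; this is the same logic, at toy scale, by which millions of tagged images improve a production recognition model.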
And, as in other areas, data tagging has a dual purpose. Beyond surveillance, the data could be used for a variety of other purposes — maybe helping autonomous vehicles drive more accurately, maybe fuelling the surveillance systems built by one of the United States’ blacklisted companies. Overall, surveillance by autocratic governments is on the rise and a blacklist employed by one nation won’t stop it.