On September 9, the White House hosted a Summit on Artificial Intelligence to discuss how the US government might use artificial intelligence (AI) to improve government services. The US is not alone. Many governments, including China, India, Canada and Germany, see AI as key to their future growth and development.
These officials understand that AI systems could improve human welfare, increase productivity and help solve complex problems such as global warming. But countries won’t be able to reap the benefits of AI unless they create an effective enabling environment for it. Ideally, that environment would include an internationally accepted system of norms to govern AI, along with policies that discourage anti-competitive behavior.
The United States should be leading this effort, because it holds a large share of the global market for AI services. Unfortunately, the US is sending mixed messages.
AI is generally used to describe computer systems that can sense their environment and think, learn and act in ways that humans do. AI applications use computational analysis of data to uncover patterns and draw inferences. To “train” these systems, AI applications utilize huge volumes of data that are supposed to be high-quality, up-to-date, complete and correct, to ensure accuracy and avoid discrimination.
Because of this huge and growing demand for training data, no nation alone can govern AI; interoperable policies are needed to govern data use and hold firms to account for potential misuse. American and Chinese data giants achieved an early lead in acquiring data, which in turn made it easier for them to capture a large share of the global market for AI.
The Trump administration’s approach to governing AI is contradictory.
On one hand, officials have worked to develop ethical guidelines, sought public comment on America’s AI strategy and proposed greater funding for AI research.
Federal officials are examining whether any US data giants engaged in anti-competitive practices, and the Federal Trade Commission has imposed fines against some of these firms. Trade policymakers have included language in the US-Mexico-Canada trade agreement to encourage the free flow of data.
Trump officials also plan to improve public systems by ensuring that government-held public data is open and generally available in a form that computer systems can easily use. The US Patent Office recently called for public comment on the patentability of AI, an important question.
But the US does not have a national law protecting personal data, and the Trump administration has done little to link such a law to its AI plans. To many observers, the US has yet to create an effective enabling environment for AI.
The Trump administration has promoted a nationalist conception of AI, emphasizing its role as a military technology and its importance to national security. Administration officials sought public comment on export controls related to AI. They restricted work and student visas, reducing the already limited pool of AI researchers in the US. Moreover, Trump officials have restricted federally funded labs from working with foreign students or benefiting from foreign funding. These strategies could undermine basic AI research.
The Trump administration has taken important international steps such as working with 41 other countries at the Organization for Economic Co-operation and Development on an international agreement for building trustworthy artificial intelligence. But taken in sum, its actions send a message that America is less interested in cooperation than domination.
In contrast, the European Union (EU) has signaled that it views AI development as a global good. Like the US, the 27 EU nations have increased funding and published a roadmap to achieve trustworthy AI. The EU also uses trade agreements to promote the free flow of data, which allow its researchers to gain access to larger pools of personal and public data. European agencies have also levied heavy fines against firms that engage in uncompetitive business practices.
Most importantly, the EU adopted regulations to ensure that personal data is protected and to grant users greater control over their data. Under these rules, firms generally can’t rely solely on automated decision-making for choices that significantly affect individuals, and they must explain how AI systems make decisions if asked.
Other countries have emulated this approach, including Brazil, India and Indonesia. The EU has recognized 12 countries as having equivalent (or adequate) levels of personal data protection, and many nations are striving to be judged adequate so that they can freely exchange data with the EU. In short, the EU’s internationalist and trustworthy approach is gaining converts.
Like the EU, the US wants to create trusted AI systems. But if its approach is nationalistic, it risks losing the international network of research, talent and capital that fuels AI. If the US wants to encourage continued innovation and trust in firms using AI, policymakers should remember that leadership abroad begins at home.
This article originally appeared in the San Francisco Chronicle.