The Ministry of Artificial Intelligence

April 30, 2018
Facebook CEO Mark Zuckerberg testifies before US Congress about the use of Facebook data to target American voters in the 2016 election. (Win McNamee/Pool Photo via AP)

One of the more striking results of CEO Mark Zuckerberg’s recent testimony before the US Congress about Facebook’s role in the Cambridge Analytica scandal was the perceived incompetence of elected officials. Both the market and the general public concluded that Zuckerberg won that particular contest, in part because the government did not seem to have the proficiency to understand his business model, let alone be in a position to regulate it.

This lack of proficiency, or even comprehension, raises a larger question as to whether governments are able to keep up with industry and technology when it comes to regulating artificial intelligence (AI) and ensuring that data protection practices are robust and effective. Certainly, governments are waking up to their responsibility to hold the AI and tech sectors accountable. But becoming aware that there is a problem is one thing; actually regulating an industry fuelled by fast-paced technology is a much taller order.

Regulating AI is no trivial issue. The longer governments defer regulation, the harder it will be.

In many ways, technology is a threat to government as we know it. Democratic governments are wrestling with its impact on elections and how AI polarizes the public sphere. Authoritarian governments are paranoid about the potential for algorithms and AI-driven social media to amplify dissent and empower dissidents.

As an industry, AI represents an unprecedented concentration of wealth and power. Governments face a data deficit: the AI industry is in a position to understand the public with far greater accuracy and nuance than the state can. Without oversight, it is only a matter of time before the industry uses that power to influence or form the government.

Bianca Wylie, a CIGI senior fellow and an open data activist, argues that this deficit arises when it comes to the management of data and digital infrastructure. She goes further to argue that, as a result of “the absence of policy and law to manage data and digital infrastructure, tech firms are building themselves up as parallel government structures.”

Zuckerberg’s testimony before the US Congress demonstrated how far behind North American governments are when it comes to understanding and regulating AI. While the equivalent Canadian parliamentary hearing featured neither Zuckerberg nor embarrassing comments from elected officials, it did illustrate that the federal government is scurrying to catch up.

But what about other countries? Are governments across the globe struggling to embrace, understand and regulate AI too?

China’s response to the potential threat of technology subverting government capabilities is to essentially nationalize its tech sector. This move is significant, given that in his testimony to Congress, Zuckerberg cited Chinese companies as Facebook’s primary competition, and suggested that to regulate or break up his company would be to further strengthen the growing Chinese AI industry. Clearly, the coordination between government and industry in China is benefiting these companies’ ongoing development.

The only government in the world with a cabinet minister dedicated to AI is the United Arab Emirates (UAE). As part of a cabinet reshuffle in October 2017 that focused on expanding the government’s technology capabilities, the UAE appointed Omar bin Sultan Al Olama as the first state minister for artificial intelligence. Sheikh Mohammed bin Rashid Al Maktoum, the ruler of Dubai and the vice president and prime minister of the UAE, described the appointment as serving the country’s desire to “become the world’s most prepared country for artificial intelligence.”

While democratic governments do not have the same options as authoritarian regimes, they still need to prepare for AI and build regulatory capacity. There is still a need for regulation and for close coordination between government and industry. Mark Zuckerberg has said as much, but whether governments are able to regulate, let alone enable and lead, the AI industry globally remains uncertain. Even the policy debate around what would achieve these outcomes is in its infancy. Countries talk about being globally competitive in AI, but have no clear or distinct ideas on how to get there, short of throwing money at research.

In 2016, Japan called for the Group of Seven (G7) to draft “basic rules” to govern AI, although it is not clear how such rules would be enforced. In 2017, G7 ministers endorsed a shared vision of human-centric AI, but no clear regulations resulted.

In the United States, President Barack Obama’s administration held several workshops on AI and public policy. One outcome was a call by researchers for the establishment of a Federal Robotics Commission, a proposal that has since largely been ignored. The US-based AI industry, meanwhile, spends heavily on lobbying to ensure its activities remain unfettered.

Twenty-four members of the European Union, empowered by the General Data Protection Regulation, have recently signed a declaration that commits public research funding to AI and the advancement of national policies that contribute to large-scale AI initiatives.

The European Union has a digital commissioner, and while Mariya Gabriel has not yet proposed binding legislation to govern AI, her office is actively gathering input from member states, experts and industry to craft effective and responsive policy. The bloc’s Digital Single Market strategy gives the European Commission the mandate to build these regulatory capabilities.

Within the European Union, France, too, has demonstrated leadership on building the capacity to govern AI — for example, a member of the French Parliament, Cédric Villani, has published a report titled “For a Meaningful Artificial Intelligence.” The French government is certainly not averse to regulation, and specifically cites the need to prevent “dystopia” when it comes to the rise of AI.

However, the current focus is on research rather than regulation. Governments are scrambling to understand the problem or build up the industry, rather than developing their own regulatory expertise. While Germany is taking a strict approach to regulating social media companies around hate speech, its AI portfolio currently rests within the Ministry of Research. Similarly, in Finland, where the government is also taking AI seriously, the file sits within the Ministry of Economic Affairs.

AI is not merely an industrial concern, nor is it purely about research and development. Rather, AI touches all aspects of society. The regulation of AI will need to be far-reaching, yet also nuanced, balancing the needs of industry with the rights of society. It has to be responsive, anticipating an ongoing and rapid rate of technological change, but it also has to conform to existing regulations on human rights, freedom of expression and transparency.

The United Kingdom, on the verge of leaving the European Union, is one of the few democratic governments taking a big-picture approach to the governance and regulation of AI. The House of Lords has established a Select Committee on Artificial Intelligence, which recently published a report proposing that the United Kingdom “lead the way on ethical AI.” The report specifically argues that success in the global AI industry will come from addressing the ethical and governance issues that surround AI.

The report does not go so far as to call for a dedicated agency to govern AI in the United Kingdom; it does, however, call upon existing British regulators to adapt to the AI era. It also acknowledges that AI regulations will almost certainly have to include antitrust measures to prevent or limit the rise of monopolies.

In its many expressions — from social media’s role in elections to the rise of global data-driven monopolies — AI poses an existential threat to government as we know it. The United Kingdom is wise to look at the big picture, and to recognize that the success of the industry is tied to the success of society and democratic governance.

In Canada, the governance of AI is currently lagging, although a recent event held by the Public Policy Forum that featured numerous participants from the Privy Council Office suggests that the federal government is starting to think about the problem.

In the United States, the outcome of the upcoming midterm elections could have a substantial impact on AI policy. If Democrats win back control of Congress, AI policy could come to the forefront as issues of electoral interference drive the political agenda. That same interference, however, could taint the outcome of the election and complicate the subsequent policy environment, making the creation of a new regulatory regime difficult in an era of scandal and distraction.

If governments are too slow at governing AI, it is distinctly possible that AI will become the government. Recently in Japan, an AI ran for mayor in Tama City, Tokyo.

The clock is ticking. The rise of AI will not wait for governments to figure out how to regulate the industry or mitigate the potential harms.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Jesse Hirsh is a researcher, artist and public speaker based in Lanark County, Ontario. His research interests focus largely on the intersection of technology and politics, in particular artificial intelligence and democracy. He recently completed an M.A. at Ryerson University on algorithmic media.