2018: A Landmark Year for Artificial Intelligence

December 27, 2018
Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, California on May 8, 2018. (AP Photo/Jeff Chiu)

Earlier this year, Google CEO Sundar Pichai made a bold claim about artificial intelligence (AI), calling it “one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire.”

It’s a dramatic statement, but it is one of many made in 2018 that reflect the perceived promise of the technology, all of which point to a common narrative found in this year’s news coverage: the future is here and AI is driving its arrival.

The data echoes that sentiment: from 2017 to 2018, the global value derived from AI business is projected to increase 70 percent to US$1.2 trillion. And according to the AI Index 2018 Report, the number of papers published, patents filed, conferences attended and job openings related to AI all continue to reach new global highs.

While the technology has a long history — the term artificial intelligence was coined more than 60 years ago during a small Dartmouth College summer research project — a number of 2018 milestones suggest that artificial intelligence experienced a landmark year.

AI Application in Every Sector

It’s not surprising that tech companies continue to hire experts and invest in AI development, but in 2018, it seems that no sector was left untouched by the technology. AI applications in the automotive and transportation industry made headlines repeatedly with the promise of self-driving cars and buses. Just this month, Waymo launched a commercial self-driving car service in Phoenix.

The move toward leveraging AI is evident in agriculture too, where climate change challenges have propelled the use of algorithms for monitoring and predicting crop health and yield. In some call centers, AI programs are being employed to detect the emotions in customers’ voices as a training mechanism for more responsive service agents. In the health care sector, robot-assisted surgery can improve techniques in complex procedures, while virtual nursing assistants can provide efficient and low-cost support. Even the humanitarian space is exploring the use of the technology to expedite aid delivery and disaster relief.

A Global Race for AI Strategy

In 2018, countless publications characterized the global, government-led competition for artificial intelligence leadership as an arms race. The ongoing battle between China and the United States for global leadership has been likened to the Cold War, with some accounts indicating China is ending the year ahead.

China and the United States aren’t the only countries to watch going forward. According to a report from the Canadian Institute for Advanced Research on national and regional AI strategies, the number of countries and regions with AI-specific strategies grew from six in 2017 to 18 in 2018. The European Union and the Council of Europe have also launched regional strategies. There is a propensity for these strategies to prioritize scientific research, talent development and industrialization, but they vary in scope. Some outline specific policies attached to funding and others simply act as guiding documents. Most include a reference to ethical AI standards. However, significant questions remain on the readiness of existing regulation and legislation to govern AI and its impact, at both national and international levels.

The question of regulation is especially pertinent as governments aren’t just building strategies for AI research and development, but also employing it in a range of services and applications. This year, the Canadian government launched a pilot program to use artificial intelligence in its immigration systems and the United States is using AI in its criminal justice systems. Meanwhile, Russia is actively exploring the use of AI in warfare, and in China, artificial intelligence powers an expansive surveillance system.

In 2018, Canada continued to position itself as a leader in the space. It was the first country to launch a national strategy, in March 2017. Earlier this month, Prime Minister Justin Trudeau announced $230 million in funding for an initiative integrating AI into supply-chain networks and, together with the French minister of digital affairs, plans for the International Panel on Artificial Intelligence, which will assess the ethical concerns stemming from the technology.

Growing Consideration for Ethical AI

In 2018, the concerning uses of technology were widespread, and AI was no exception. A Belgian political party employed “deep fakes” — computer-generated replications of a person — to produce fake videos of US President Donald Trump. Google was exposed as quietly supplying AI technology for a drone warfare project. Amazon scrapped an AI recruiting tool it was working on, after the tool was revealed to be biased against women.

Researchers and human rights organizations alike have been sounding alarm bells about these kinds of implications of AI for years. In 2018, there was a move toward establishing the principles and frameworks required to give substance to the often-cited but less universally defined concepts of human rights-based and ethical AI; two such frameworks were launched or drafted in Canada.

The Toronto Declaration focused on protecting the rights to equality and non-discrimination in machine learning systems, using an international human rights law framework to provide a starting point for upholding rights and clarifying the obligations of companies and governments. The Montreal Declaration for a Responsible Development of Artificial Intelligence, announced in 2017 but launched this year, takes a broader perspective, focusing on an overall ethical framework for the technology’s development and use.

Even private sector leaders spoke out this year. Senior leadership at 116 AI companies wrote a letter to the United Nations, calling on the body to ban lethal autonomous weapons, often referred to as “killer robots.” Microsoft president Brad Smith also took a stance, calling for regulatory action from governments on the use and development of facial recognition technology, which often employs AI.

Artificial Intelligence in 2019

It’s unlikely that AI will take up any less space in the year to come, and the tension between its potential for good and its capacity for harm will only grow.

At least 12 countries have AI strategies in development, with more anticipated. Companies will continue investing in AI-based solutions — Facebook, for example, claims AI is the key to addressing the many issues that plagued the company in 2018, including fake news and disinformation. Additionally, the global value of AI business is estimated to reach nearly US$3.9 trillion by 2022.

AI’s integration into society seems to be moving at full speed, but many questions remain about how its limitations will be addressed, what governance models should look like and what safeguards can be put in place to mitigate the potential negative consequences. It’s important that as the technology rapidly develops, governments don’t give up the driver’s seat in governing it.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Nikki Gladstone is a RightsCon Program and Community Manager and a Master of Global Affairs (MGA) graduate from the Munk School of Global Affairs at the University of Toronto, where she focused on the intersection of technology, innovation, and human rights.