Collaboration Is Necessary for Ethical Artificial Intelligence

August 20, 2018
A computer running artificial intelligence software defeated two teams of human doctors in accurately recognizing maladies in magnetic resonance images at the world's first competition in neuroimaging between AI and human experts. (AP Photo/Mark Schiefelbein)

Over the past few years, several countries around the world have begun to develop national artificial intelligence (AI) policies and strategies, in which AI, the digital economy and the future of work consistently appear as priorities. Under the Argentine presidency, the Group of Twenty (G20) is working to build on previous consensus on the adoption of digital technologies and policies in order to “provide recommendations on inclusive development in the era of digital transformation.” AI also features prominently in discussions of Industry 4.0, defined as “the next phase in the digitization of the manufacturing sector,” where much of the conversation has focused on AI in relation to the workforce, privacy concerns and cyber warfare.

Despite the wide-reaching impacts of AI on various industries and sectors, there is no mechanism or body charged with assessing national AI strategies, policies or ethics. The subject, which increasingly shapes day-to-day life around the world, warrants serious assessment.

Tim Dutton’s Politics + AI (an online publication) is one of the few resources that attempts to compile and compare national AI strategies. Many of the strategies it catalogues are vague, contributing to a lack of transparency about how governments around the world are using AI. As the various working groups of the G20 meet over the course of the year to discuss digitization and Industry 4.0, it is paramount that they make concrete efforts to foster an environment in which information sharing is the norm. Such sharing is especially important as Australia, the European Union, France, Mexico, Singapore, the United Kingdom and members of the Nordic-Baltic region pledge to develop AI ethics frameworks. As countries develop their own ethical frameworks for AI, there is a risk that divergent and conflicting pathways will emerge: the principles and regulations one country adopts may conflict with those of others, resulting in AI technologies that fail to operate in a global context.

Without concerted efforts to develop a global ethical framework for AI, technologies may be misappropriated or misused, or even intentionally used for nefarious purposes, such as surveillance programs used to identify and suppress dissent.

AI carries dual-use risks: tools developed for legitimate purposes can be repurposed to support illegal, criminal or unethical activities. It is difficult for technologists, researchers, policymakers and users to develop measures to mitigate these risks because education and awareness of the ethical and social implications of AI are lacking. Furthermore, since AI has global impacts regardless of where it is deployed, technologists must be aware of the varying political, social, cultural and economic systems that may incentivize or allow individuals to use AI to suppress, oppress or control others.

Developing ethical principles or guidelines can help technologists, researchers, industry and users understand the implications of the AI tools they are developing, marketing or using. While teaching ethics and critical thinking is important in any context, the development of emerging and exponential technologies makes it an even greater imperative. Technologies tasked with decision making, such as AI, introduce ambiguity, making it difficult to discern who is ultimately responsible for the consequences or impacts of those technologies. Indeed, by delegating decision making to these technologies, individuals may become less apt to think critically about the consequences of the decisions being made.

Mapping out the impacts of AI and forecasting how it will shape the future is limited by the contents of national, regional and international AI strategic plans and documents. While it is widely acknowledged that AI can only be as good as its inputs (good data in, good data out), this same principle has not been applied to AI strategy, policy development or oversight. For example, the Pan-Canadian AI Strategy does not provide details on investments in specific types of AI technologies, or metrics and indicators for judging whether the strategy has succeeded. The lack of comprehensive information in national AI strategies makes it difficult for states to coordinate their efforts with one another, and it is unclear how comprehensive policies and regulations can be developed when governments consider investments and technology development in silos.

What is evident, however, is that most, if not all, existing national AI strategies fail to prioritize peace building, human rights, and social and environmental justice. That said, governments are not the only actors shaping the future of AI. The private sector plays a key role, investing millions of dollars in AI research, development and commercialization, yet commercial interests do little to encourage transparency among these private actors. Governance mechanisms that focus solely on the role of states will fall short in ensuring greater transparency and accountability in AI.

AI ethics are complex and the related discussions can’t be tackled in one sitting — far from it. It is important, however, that steps be taken to better equip the individuals at the table — developers, regulators and technology users alike. The following list of suggested steps is by no means exhaustive, but it is a strong starting point for the discussion:

  • Develop a global repository of AI strategies and policies to ensure greater transparency and accessibility for the general public and relevant stakeholders, such as policymakers.
  • Develop a governance structure or platform for ensuring accountability and transparency in the development of AI, in particular as it relates to the social and political impacts of these technologies.
  • Encourage greater knowledge sharing among states and stakeholders to foster a more collaborative environment (most national AI strategies focus on building competitive economic and military advantages rather than prioritizing peace building, human rights, social justice and environmental sustainability).
  • Create opportunities for states and other actors to collaborate on the development of a global ethical framework for AI and an ethics board for exponential and emerging technologies.
  • Develop accessible, comprehensive education curricula that ensure an interdisciplinary understanding of AI and its impacts on society, enabling citizens to make informed decisions in their use of AI and other emerging technologies.
  • Include diverse stakeholders in the development of AI policies and strategies.
  • Finally, invest in studying and comparing the social, ethical, political and environmental implications of AI, in addition to its security and economic implications.

Unless we develop AI policies and regulations in a collaborative environment, AI itself is unlikely to foster collaboration and will instead reinforce norms of competition.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Bushra Ebadi is a social innovator focused on designing sustainable, innovative solutions to complex global challenges using her multidisciplinary background and skills in design and systems thinking, policy analysis and mixed methods research.