Public and Private Dimensions of AI Technology and Security

November 16, 2020

This article is a part of Modern Conflict and Artificial Intelligence, an essay series that explores digital threats to democracy and security, and the geopolitical tensions they create.

The rapid emergence of disruptive technologies such as artificial intelligence (AI) requires that new governance frameworks be created so that these technologies’ development can occur within a secure and ethical setting, to both mitigate their risks and maximize their benefits for humanity. There are public and private dimensions to AI governance. Various private companies have called for increased public regulation to ensure the ethical use of new technology, and some have even suspended high-risk applications, such as facial recognition for law enforcement, until a proper regulatory framework is in place. Public-private collaboration is essential to creating innovative governance solutions that can be adapted as the technology develops, not only to support innovation and commercial application but also to provide sturdy guardrails that protect human rights and social values.

Private Initiatives in AI Governance

Private companies’ governance initiatives generally involve best practices and voluntary guidelines to govern the responsible development and use of AI.

One of the earliest initiatives, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, was launched in 2016 by the Institute of Electrical and Electronics Engineers (IEEE). It aims to “ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity” (IEEE 2017, 3). The initiative also involves a series of voluntary IEEE standards that address governance and ethical aspects of AI.

Another private initiative, the Partnership on AI, was established by several large technology companies — Apple, Amazon, DeepMind and Google, Facebook, IBM and Microsoft — and has since expanded to include a wide variety of companies, think tanks, academic AI organizations, professional societies, and civil society groups such as the American Civil Liberties Union, Amnesty International, the United Nations Children’s Fund and Human Rights Watch. The partnership’s work involves the study, discussion, identification, sharing and recommendation of best practices in the research, development, testing and fielding of AI technologies. It addresses such areas as fairness and inclusivity, explanation and transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and the trustworthiness, reliability, containment, safety and robustness of the technology.

The Information Technology Industry Council (ITI), a trade association, has developed its own set of principles for designing AI technologies that go beyond compliance with existing laws (ITI 2017). The ITI recognizes the potential uses and misuses of the technology, the implications of its use or misuse, and the industry’s responsibility and opportunity to take steps, through a commitment to ethics by design, to avoid the reasonably predictable misuse of AI.

Many large technology companies also have value-based principles to guide their internal AI activities.1 However, these private initiatives are not binding and depend on voluntary compliance by the companies using the technology.

Public Initiatives in AI Governance

Public governance initiatives include international value-based policies for responsible AI and guidance for national legislation.

The Organisation for Economic Co-operation and Development (OECD) developed principles on AI to promote trustworthy AI that respects human rights and democratic values. The “OECD AI Principles,” formally known as the Recommendation of the Council on Artificial Intelligence, were adopted in May 2019 by OECD member countries and are the first such principles signed on to by governments (OECD 2019a). Beyond OECD members, other countries, including Argentina, Brazil, Costa Rica, Malta, Peru, Romania and Ukraine, have already adhered to the OECD AI Principles, with further adherents anticipated. The principles set standards for AI that complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct. They identify five complementary value-based principles for the responsible stewardship of trustworthy AI and provide five recommendations to governments. While not legally binding, they aim to set the international standard for responsible AI and to help governments design national legislation. In June 2019, the Group of Twenty (G20) adopted human-centred AI principles that draw on the OECD AI Principles, affirming at the G20 level “that the AI we want is centered on people, respects ethical and democratic values, [and] is transparent, safe and accountable” (OECD 2019b).

In addition to international principles, many national governments have presented AI policies or policies that purport to regulate some aspect of the adjacent technology stack. In Canada, the National Cyber Security Strategy presents a vision for protecting Canadians’ digital privacy, security and economy, as well as a commitment to collaborate with France on ethical AI (Public Safety Canada 2018).

China has a national recommended standard for personal data collection, issued as GB/T 35273-2020 or “Information Security Technology — Personal Information Security Specification” (People’s Republic of China 2020), which addresses data considerations similar to those in the European Union’s General Data Protection Regulation. China’s “Next Generation Artificial Intelligence Development Plan” highlights the need to strengthen research and establish laws, regulations and ethical frameworks on legal, ethical and social issues related to AI and protection of privacy and property (People’s Republic of China 2017). In India, there is discussion on the importance of AI ethics, privacy, security and transparency, as well as on the current lack of regulations around privacy and security (National Institute for Transforming India 2018).

The European Parliament’s Committee on Legal Affairs recommends that “the existing Union legal framework should be updated and complemented, where appropriate, by guiding ethical principles in line with the complexity of robotics and its many social, medical and bioethical implications” (European Parliament 2017, 9). The European Commission (2018a) published its strategy paper on AI but did not propose any new regulatory measures. As a follow-up, it published a “Coordinated Plan on Artificial Intelligence” that set forth its objectives and plans for an EU-wide strategy on AI (European Commission 2018b). A UK strategy considers the economic, ethical and social implications of advances in AI and recommends preparing for disruptions to the labour market, open data and data protection legislation, data portability, and data trusts. The UK perspective centres on the concern that large companies controlling vast quantities of data must be prevented from becoming overly powerful. France aims to implement inclusive and diverse AI and to avoid the “opaque privatization of AI or its potentially despotic usage” (Macron, quoted in Rabesandratana 2018).

The United States appears to be focused on the military aspects of AI policy, with Congress establishing a National Security Commission on Artificial Intelligence mandated “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.”2 The commission’s latest report recommends a White House-led technology council and aims to convey one big idea: “The countries, companies, and researchers that win the AI competition — in computing, data, and talent — will be positioned to win a much larger game” (National Security Commission on Artificial Intelligence 2020, 2).

However, these policies are not binding on private players. Governments are slowly starting to introduce binding national laws directed at certain technologies, such as automated decision making, facial recognition and conversational agents. While technology development and deployment accelerate, private actors continue to request increased regulation to ensure the ethical use of new technology and mitigate risk.

The Need for a Joint Effort

A joint effort by private companies and governments is needed to create a more agile regulatory framework that is responsive to the accelerating pace of disruptive technologies. Many private entities have a better understanding of AI tools and of the unintended impacts of regulation, making their perspectives essential to public regulators.

Public action is required to mandate compliance with AI policy and to enforce ethical requirements. There should be a coordinated effort among different public actors: regulators with similar policy objectives should adopt universal language for legislation to encourage regulatory convergence. International standards with universal language can, for example, help streamline adoption by private players operating in multiple countries. Cooperation from private actors is, in turn, required for widespread compliance with any new regulation.

Current AI policies are often value-based and might not provide enough detail on how private actors can achieve the target objectives, and so comply with the policy, across different use cases. There should be sufficient guidance to understand how a specific AI tool should be designed to meet an objective and whether the tool complies with that objective. Any regulation should include operational guidance and workable directives developed in cooperation with private actors. Detailed examples for different applications can help provide guidance and create more certainty about a specific policy’s impact on those applications.

Private players can agree to voluntarily adopt public initiatives until new governance solutions are available; if a company voluntarily adopts an AI policy, it must then comply with it. Evaluating specific AI tools for compliance with principles demands significant effort on the part of the private player, and a comprehensive evaluation often requires a technical understanding of the specific tool to see how it maps to the different principles. Regulators, however, can lack this knowledge, and input from private industry is needed to improve understanding of these complex technologies. Enforcement of a policy might also require examination of the AI tool itself, which is undesirable if aspects of the tool are protected as trade secrets; where a policy mandates examination of code, protective measures for that code will be required.

Private contracts can be used to increase adoption of governance terms for new disruptive technologies. Ethics and governance requirements that might otherwise be voluntary can be incorporated into contracts to create binding obligations between private players. Incorporating governance terms into contracts does, however, require agreement by the contracting parties. If parties commonly included governance terms in contracts relating to AI technology, they would help encourage the adoption and standardization of these terms and establish at least minimum standards for ethics and security.

Open-source software licences can also be used to encourage adoption of governance terms. Disruptive technologies are commonly offered as open-source tools licensed under standard terms. These licences could be updated to include minimum governance requirements, such as a list of permitted ethical uses of the open-source tools and a list of prohibited uses. Widespread use of the tools could in turn trigger widespread adoption of these minimum governance terms. For example, contact-tracing tools that track outbreaks for public health purposes can involve collecting data of varying levels of sensitivity; such tools and their associated data can be released under terms of use that mandate basic ethical practices.

Innovative regulatory models for disruptive technologies are also emerging. New hybrid “regulatory markets” pair strong government oversight with private sector regulators, which compete for the right to regulate specific AI fields. Instead of enacting traditional regulation, government can set the goals, and independent companies can determine how to meet them, giving those companies an incentive to invent streamlined ways to achieve the government-set goals. There is a risk that private regulators will be influenced by the entities they regulate rather than by the public interest, and it will be important to ensure that private regulators act independently.

Private actors will continue to request public regulatory guidance for high-risk applications of disruptive technology, such as the use of AI in self-driving cars and law enforcement. Until proper security and ethical regulatory measures are in place, the use of these technologies will be stifled; yet uncertainty around regulatory compliance and enforcement can also stifle innovation. We need new solutions for regulating disruptive technology that are responsive to high-risk applications while also supporting technology development and deployment.

  1.  See www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6; Pichai (2018); IBM (2018).
  2.  John S. McCain National Defense Authorization Act for Fiscal Year 2019, Pub L No 115-232, §1051(b)(1), 132 Stat 1636 at 1964.

Works Cited

European Commission. 2018a. “Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe.” COM(2018) 237 final, April 25. Brussels, Belgium: European Commission. https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe.

———. 2018b. “Coordinated Plan on Artificial Intelligence.” COM(2018) 795 final, December 7. Brussels, Belgium: European Commission. https://ec.europa.eu/knowledge4policy/ai-watch/coordinated-action-plan-ai_en.

European Parliament. 2017. Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). A8-0005/2017. January 27. www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.pdf.

IBM. 2018. “IBM’s Principles for Trust and Transparency.” THINKPolicy Blog, May 30. www.ibm.com/blogs/policy/trust-principles/.

IEEE. 2017. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. Version 2 For Public Discussion. December. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf.

ITI. 2017. “AI Policy Principles.” October 24. Washington, DC: ITI. www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf.

National Institute for Transforming India. 2018. “National Strategy for Artificial Intelligence.” Discussion Paper, June. https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf.

National Security Commission on Artificial Intelligence. 2020. National Security Commission on Artificial Intelligence 2020 Interim Report and Third Quarter Recommendations. October. https://drive.google.com/file/d/1R5XqQ-8Xg-b6CGWcaOPPUKoJm4GzjpMs/view.

OECD. 2019a. Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449. Adopted on May 21. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

———. 2019b. “Remarks by Angel Gurría, OECD Secretary-General, Osaka, Japan.” 2019 G20 Leaders’ Summit — Digital (AI, data governance, digital trade, taxation), June 28. www.oecd.org/about/secretary-general/2019-g20-leaders-summit-digital-osaka-june-2019.htm.

People’s Republic of China. 2017. “Next Generation Artificial Intelligence Development Plan issued by State Council.” China Science and Technology Newsletter No. 17, September 15. Beijing, China: Department of International Cooperation Ministry of Science and Technology. http://fi.china-embassy.org/eng/kxjs/P020171025789108009001.pdf.

———. 2020. [Information Security Technology — Personal Information Security Specification.] GB/T 35273-2020. Issued by General Administration of Quality Supervision, Inspection and Quarantine and State Administration for Market Regulation on March 6; implemented on October 1. www.secrss.com/articles/17713.

Pichai, Sundar. 2018. “AI at Google: our principles.” The Keyword (blog), June 7. www.blog.google/technology/ai/ai-principles/.

Public Safety Canada. 2018. Canada’s Vision for Security and Prosperity in the Digital Age. Cat. No. PS4-239/2018E. Ottawa, ON: Public Safety Canada. www.publicsafety.gc.ca/cnt/rsrcs/pblctns/ntnl-cbr-scrt-strtg/ntnl-cbr-scrt-strtg-en.pdf.

Rabesandratana, Tania. 2018. “Emmanuel Macron wants France to become a leader in AI and avoid ‘dystopia.’” Science, March 30. www.sciencemag.org/news/2018/03/emmanuel-macron-wants-france-become-leader-ai-and-avoid-dystopia.

The opinions expressed in this article are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Maya Medeiros is an intellectual property lawyer, patent agent and trademark agent, and has a degree in mathematics and computer science.
