Biden’s Executive Order on AI: Bravo, Now Back to Work

The document weighs in at a hefty 20,000 words. Here is a brief summary of its most notable aspects.

November 8, 2023
The Biden administration’s executive order effectively buys time for policy makers to get AI legislation on the books. (Photo by Budrul Chukrut/SOPA Images via REUTERS)

The Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI), released at the end of October, was a clear win for democracy. Confounding the low expectations of some, including this writer, the order demonstrates a focused and creative effort to make AI work in the public interest.

That said, it’s time to get back to work. The order kicks off a cascade of deadlines, set at 30, 45, 60, 90, 120, 150, 240, 270, 365 and 540 days from signing, that require hiring new staff, creating guidelines, delivering reports, launching public comment periods, establishing councils and working groups, developing new funding mechanisms and even delivering recommendations for legislation on AI. Agencies will have to move at a frenetic pace even to begin to meet these deadlines.

The document weighs in at a hefty 20,000 words. Here is a brief summary of five of its most notable aspects.

First, the order addresses private sector uses of AI — and not just when the government is the customer. Many experts and journalists anticipated that the order would primarily apply to government applications of AI, and perhaps use government procurement policy to drive private sector practices on AI. There’s an element of that here, but this order goes further by invoking the Defense Production Act (a 1950 law) to allow for regulation in the interests of “national defense and the protection of critical infrastructure.” Some critics see this as regulatory overreach. Not so. Given the urgency of the policy issues, it’s an appropriate and pragmatic solution at a time when the US Congress is paralyzed by inter- and intra-party battles.

Second, the order applies “dual-use” terminology to “foundation models,” laying out an important vocabulary and rationale for executive branch actions and future legislation. “Dual-use” refers to technologies with both civilian and potential military applications; nuclear, biological, chemical and satellite technologies are common examples. There is precedent for restricting access to and imposing export controls on such technologies, so it’s great to see the order acknowledge the real danger of AI systems and their potential to be weaponized.

Third, very large AI models, likely one or two generations beyond systems such as ChatGPT, Google’s Bard and Anthropic’s Claude, will be subject to significant new reporting, safety-testing and red-teaming requirements because of their dual-use nature.

The National Institute of Standards and Technology (part of the Department of Commerce) will take the lead in defining standards and best practices for this testing. Under the order, companies will be legally required to share any results “relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; [and] the possibility for self-replication or propagation.” Companies will also be required to report computing assets above a certain threshold, to advise the government of their future AI development plans, and to describe the security protocols they use to keep model technologies protected. Cloud computing providers will likewise need to disclose when their services are used to train very large models.

Fourth, open-source AI is addressed directly in the order. For many months, I’ve been sharing my concerns about the potential for open-source AI systems to be dangerous in the wrong hands. The order directs the secretary of commerce to study these models, described here as “dual-use foundation models with widely available model weights,” and to issue regulatory recommendations within 270 days.

Finally, civil rights are treated expansively, and the order sets out numerous requirements for relevant agencies to ensure that AI systems do not discriminate. It also sets in motion the exploration of future legislation to protect people from discrimination in areas including housing, credit, health care, criminal sentencing and government benefit programs.

The above being said, it is important to remember that this is not legislation. Much of it is not yet binding, and enforcement mechanisms and liability standards are not clearly established. The order can’t go nearly as far as Congress could, not to mention the European Union with its forthcoming AI Act (which I expect will be the first major piece of AI legislation from a democracy, and will likely remain the most stringent and impactful for years to come). The order could also easily be reversed, in part or in full, by a future administration, as happens to executive orders after virtually every change of administration.

This order effectively buys time for policy makers to get AI legislation on the books. As such, it marks an important step forward. Indeed, we may find that a jigsaw puzzle of executive measures in the United States, combined with legislative measures from Europe, defines the regulatory landscape on AI for months or even years to come. Although that could be confusing, we’re already in a much better position than we were even a week ago when measured against the goal of AI being developed in the public interest worldwide.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

David Evan Harris is a CIGI senior fellow, Chancellor’s Public Scholar at UC Berkeley and faculty member at the Haas School of Business.