Can We Have Our AI Cake and Eat It Too?

Artificial intelligence offers an abundance of advantages for humanity. Yet the technology has a dark side we cannot ignore.

November 30, 2023
Finger reaching out to touch AI symbol on a screen. (Shutterstock)

We asked four CIGI experts: “Can we have our AI cake and eat it too? What should policy makers and industry do now to ensure humanity benefits from AI, while mitigating its risks?” Here are their responses.

Susan Aaronson, Senior Fellow

US policy makers have many options if they want to regulate artificial intelligence (AI). They can regulate particular types of AI, particular risks, the firms that produce AI, or specific business practices. However, policy makers have not paid sufficient attention to the potential of data and corporate governance to mitigate AI harms such as discrimination.

To address the risk of AI discrimination, Congress could direct the Securities and Exchange Commission (SEC) to develop rules governing the data underpinning AI. The SEC has already determined that how firms protect data from cyberthreats such as cyber theft is “material” information for shareholders. How firms acquire, collect and use data for AI should likewise be understood by investors as “material,” because incomplete, inaccurate or unrepresentative data can also pose a substantial risk to investors, as well as to society.

Moreover, the SEC should require that AI developers and deployers maintain records on the provenance of data and delineate how their algorithms use data to make decisions, predictions and recommendations.

While there is no one perfect recipe for AI governance, data is the key ingredient for every type of AI. Hence, US officials should not overlook the utility of data and corporate governance in ensuring everyone benefits from AI.

Susie Alegre, Senior Fellow

AI has exploded this year, offering to do everything for everyone, all the time. From writing the eulogy for your parent’s funeral or deciding your fate in court, to creating a recipe from leftovers in your fridge or being your fantasy girlfriend, AI is touted as a tool that can do it all much better than we can ourselves.

Headlines suggest an AI apocalypse one day and a techno-utopia filled with smiley happy people the next. Creators, tech leaders, academics and policy makers are also divided on what a future with AI holds. Amid all the noise, it is hard to know what to think — although if you ask it, an AI chatbot could surely help with that, too.

Perhaps the most dangerous thing about the current wave of AI is the hype designed to distract us from rational thinking. AI can support human endeavours, but it should not replace us, and it is not above the law. Tech leaders are calling for AI regulation, some even for a pause in development, but slow action from regulators still gives the companies that profit from AI time to embed their technology in society, creating dependency before actual legal consequences catch up with them.

If we want AI safety that reinforces our human rights, now and in the future, it is time to ask the experts on safety and human rights, not the AI salesmen, for advice.

Duncan Cass-Beggs, Executive Director, Global AI Risks Initiative

Recent events at OpenAI highlight one of the greatest global injustices of our time: a few people in a handful of companies are making decisions that could affect billions of people around the world. This small group does not have the legitimacy to decide on behalf of all humanity what risks are worth taking in the race to develop highly advanced AI systems.

Addressing the dangers of AI does not require a complete halt or slowdown in development. Instead, regulation and risk management must be proportionate to the level of risk: many kinds of AI require little or no regulation, while other systems require guardrails and incentives calibrated to balance risks and benefits.

AI systems capable of creating catastrophic global consequences merit a particularly stringent approach: one that ensures that such systems cannot be developed until they are demonstrated to be reliably safe.

Agreement is needed on a legitimate and effective process to handle the greatest global risks posed by AI. This will be extremely challenging — strong public pressure and unprecedented ingenuity and innovation will be required. Only by attending to such risks, however, will we then be able to enjoy the many benefits that future AI development may bring.

Jeni Tennison, Senior Fellow

Our lives and relationships are changing and becoming increasingly datafied as AI is developed and deployed in our workplaces, schools, hospitals and communities.

As with any revolution, there will be winners and losers. What matters is not simply that humanity benefits from AI and that its risks are mitigated, but where and on whom those benefits and harms fall.

By default, the winners of the AI revolution will be those who are already ahead: big tech companies that have access to data and can afford computing power; employers that monitor and automate their employees’ jobs; governments that set the rules they play by; and those who are already privileged and hold power. AI, as it is currently pursued, is set to primarily enhance the lives and work of Western knowledge workers and the digital elite off the back of data-labelling sweatshops and exploited creators.

Instead, AI and the data that underpins it must be placed under democratic control. Policy makers and industry need to orient toward creating a world that is equitable, just and sustainable, with data and AI put in service of those goals. Transparency, accountability and governance have to be targeted at equipping and empowering those whose voices would otherwise be ignored.

Only when AI benefits the least advantaged will it truly benefit all of humanity.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Susan Ariel Aaronson is a CIGI senior fellow, research professor of international affairs at George Washington University and co-principal investigator with the National Science Foundation/National Institute of Standards and Technology, where she leads research on data and AI governance.

Susie Alegre is a CIGI senior fellow and an international human rights lawyer.

Duncan Cass-Beggs is executive director of the Global AI Risks Initiative at CIGI, focusing on developing innovative governance solutions to address current and future global issues relating to artificial intelligence.

Jeni Tennison is a CIGI senior fellow and the founder and executive director of Connected by Data.