Will Autonomous AI Bring Increased Productivity, Cognitive Decline, or Both?

There is something deeply worrying about the prospect of AI agents that can reason and act independently.

June 27, 2024
A driverless car races against former Formula One driver Daniil Kvyat at the Abu Dhabi Autonomous Racing League in the United Arab Emirates, April 27, 2024. (Amr Alfiky/REUTERS)

The recent launches of OpenAI’s GPT-4o and Google’s Gemini 1.5 Pro have given impetus to the development of a more sophisticated set of artificial intelligence (AI) agents that will offer human-like interaction with users, acting as their digital assistants. These agents will be able to accept text, audio, images and video, and provide a personalized output in the same formats, further narrowing the gap between human and machine.

Indeed, the proliferation of plugins and APIs (application programming interfaces) associated with large language models is leading to the development of intelligent AI agents that will function as fully autonomous systems. A software agent is a program coded to perform a dedicated task. By combining a set of predefined rules with the ability to accept cues from different sensors or inputs to make decisions, AI can act autonomously. The result will be an “intelligent” agent similar to Google Assistant and Apple’s Siri, but with far greater subtlety, depth and range.

The ability to make decisions and adapt based on varying input signals or rules, coupled with built-in AI, has the potential to make these tools more powerful than any existing automation software. In addition to the input sensors, these AI agents will have a set of relevant Internet of Things output devices to allow them to act.
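The agent model described above — predefined rules applied to sensor inputs to select actions — can be illustrated with a minimal sketch. Everything here is invented for illustration (the `Rule` type, the thermostat thresholds, the action names); it is a toy of the sense-decide-act loop, not a real agent framework.

```python
# Illustrative only: a toy "sense -> decide -> act" step showing how an
# agent combines predefined rules with sensor readings to choose actions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    condition: Callable[[Dict[str, float]], bool]  # test against sensor readings
    action: str                                    # output command if the test passes

def decide(rules: List[Rule], readings: Dict[str, float]) -> List[str]:
    """Return every action whose rule matches the current sensor readings."""
    return [r.action for r in rules if r.condition(readings)]

# Hypothetical thermostat-style agent: two predefined rules over one sensor.
rules = [
    Rule(lambda s: s["temp_c"] > 26.0, "turn_on_cooling"),
    Rule(lambda s: s["temp_c"] < 18.0, "turn_on_heating"),
]

print(decide(rules, {"temp_c": 28.5}))  # ['turn_on_cooling']
```

Real agents replace the hand-written conditions with learned models and route the chosen actions to connected output devices, which is what makes them both more capable and harder to audit than this fixed rule set.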

There is something deeply worrying about the prospect of AI agents that can reason and act independently, make copies of themselves, and spread across networks while carrying out complex tasks, without human oversight. Moreover, these agents could run on billions of communication devices with virtually no guardrails on their development and training, while being released with a break-and-fix approach to quality control.

The possible uses of AI agents, both positive and harmful, are staggering. Imagine having multi-tasking personal agents with deep knowledge of virtually any topic, available anytime, anywhere. For example, AI could moderate conversations among people scattered around the world who speak different languages, with each participant hearing the entire conversation in their language of choice.

Currently, humans set goals, while AI agents independently choose the best actions needed to achieve those goals. Inadequate guardrails and the reduction or removal of the human from the testing loop could leave the door ajar for misuse.

Highly autonomous AI agents could conceivably make decisions that are misaligned with human values or intentions. For example, AI agents deployed in financial markets could launch and control manipulative practices, causing market instability and economic turmoil.

Still unknown is the impact, over time, on independent and original thinking. One can imagine people abandoning human interactions in favour of associating with their AI agents. AI agents as avatars are already driving social change as synthetic partners, friends and mentors. This could lead to serious disengagement from family, friends and all forms of social interaction, from work to play.

What will be the effect on behaviour when individuals and groups can speak with AI agents that appear human in their ability to provide detailed answers? Such agents could impair the cognitive skills of those who depend on them. This is why university professors have demanded that students show the full body of work used to reach an answer rather than just the result.

Discussion of AI risks thus far has focused too little on the possible unforeseen consequences on human behaviour. This needs to change.

Increased productivity is wonderful. But while embracing it, society must also find ways to minimize the unintended side effects. Critical thinking and problem solving are essential to human existence. Any technology that impinges on these capacities needs to be approached with great caution.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Eli Fathi is chair of the board at MindBridge Analytics, based in Ottawa, Canada.

Peter MacKinnon is a senior research associate in the Faculty of Engineering at the University of Ottawa, Canada.