Reclaiming Cognitive Autonomy in the Age of AI

Reclaiming our autonomy demands hybrid intelligence that preserves independent thought, strengthens human judgment and resists overreliance on machines.

August 19, 2025
As artificial intelligence becomes ever more embedded in our daily tasks, our ability to function independently silently deteriorates. (Nicolas Maeterlinck/REUTERS)

Imagine waking up tomorrow to a malfunctioning AI assistant. Your emails remain unsent, your schedule unorganized, and even your morning commute is disrupted due to navigation glitches. Suddenly, the seamless integration of technology in your daily life becomes glaringly apparent — and alarming.

This hypothetical scenario, though extreme, underscores an increasingly tangible concern: as artificial intelligence (AI) becomes ever more embedded in our daily tasks, our ability to function independently silently deteriorates.

First, it is important to understand the phenomenon known as agency decay: a subtle yet substantive erosion of human autonomy resulting from overreliance on AI. The world is going through a dangerous transition, in which people shift gradually from experimenting with AI to integrating it, then to relying on it and, in the near future, possibly to being addicted to it. Unchecked, this trend threatens our cognitive autonomy, our appetite for critical thinking and our creative problem-solving skills. But there is an alternative: to navigate this hybrid world, we need hybrid minds, and that requires hybrid intelligence. But let’s start from the beginning.

The Path to Hybrid Intelligence

Hybrid intelligence arises from the complementarity of natural and artificial intelligences. In this realm, the “A-Frame” is a simple set of steps to curate a mindset geared toward hybrid intelligence and focused on preserving cognitive agency in the age of automation. It consists of four interconnected pillars: awareness, appreciation, acceptance and accountability.

Awareness involves recognizing moments of cognitive offloading. When we let an app choose our route or an algorithm suggest a hiring shortlist, are we making informed choices or simply accepting defaults? In its AI Risk Management Framework, the US National Institute of Standards and Technology (NIST) calls this “contextual awareness” — and it’s the first line of defence against blind delegation.

Appreciation reminds us of what humans bring to the table: nuance, values, compassion and embodied experience. A recent study in diagnostic medicine published in Nature showed that teams combining human clinicians with AI support outperformed humans or AI working independently, so long as the humans remained actively engaged rather than passively compliant.

Acceptance means treating AI as a partner and not as a crutch. Automating routine tasks can free up energy for more meaningful work — but only if we design processes in which humans remain active contributors. That’s the spirit behind the European Union’s legal requirement for “meaningful human oversight”: AI systems designed to allow for human control and intervention. More precisely, Article 14(1) of the EU Artificial Intelligence Act requires high-risk AI systems to be designed for effective oversight by natural persons during their use. The goal is to prevent, or at least minimize, risks to safety, health and fundamental human rights. Although some argue that this type of oversight is of questionable value when the outputs are highly technical, it brings to the forefront the fourth component of the A-Frame.

Accountability ensures that responsibility remains human. Who owns a decision made with AI assistance? Who audits the data, explains the logic and corrects the errors? The global agreement on the ethics of AI adopted by members of the UN Educational, Scientific and Cultural Organization (UNESCO) emphasizes traceability — from input to output — as a foundational principle for AI use in the public interest.

To operationalize the A-Frame at a foundational level, education systems must evolve dramatically. Policy makers need to revamp curricula to integrate double literacy, a dual understanding of human decision making and of AI technology, in order to curate hybrid intelligence. Double literacy equips learners with two essential competencies: first, a holistic understanding of self and society, and thus of the conditions that shape their own cognitive and emotional functioning; and second, a candid understanding of AI’s operational logic and limitations. This educational reframing ensures that students not only know how to use AI effectively but can also critically assess when and why to step away from algorithmic assistance.

For instance, teaching approaches that emphasize metacognition — defined as thinking about one’s thinking — can significantly bolster self-awareness and mitigate agency decay. But they must be implemented now, while we are still in the early stages of the transition to a hybrid society. Equally, courses that highlight ethical dilemmas, AI’s biases and scenarios of AI failure can help learners internalize the necessity of remaining masters of their own judgment.

A Note to Policy Makers

For policy makers, the path forward is not just about catching up to the pace of AI development — it’s about shifting the narrative. AI cannot be treated solely as a technical issue or economic driver. It is a social force shaping cognition and behaviour, and, ultimately, trust.

Legislation must go beyond risk classification to include mandates for public education, ethical transparency and psychological safety. Schools need funding not only for instruction in the STEM subjects (science, technology, engineering and mathematics), but also for teaching self-reflective critical thinking.

To build and deserve trust, public institutions must set an example by publishing AI “model cards”: concise, standardized documents describing their AI models. These documents should provide information about a model’s intended use, development process, performance and limitations. Furthermore, institutions should document every significant AI-assisted decision in a transparent manner.
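
To make the idea concrete, a model card can be thought of as a small piece of structured data. The sketch below is purely illustrative, not any institution’s actual template: the field names and the fictional triage model are assumptions, chosen only to mirror the four kinds of information named above.

```python
# Illustrative sketch of a minimal "model card" as structured data.
# Field names are hypothetical, loosely following the sections named above;
# real templates published by public institutions will differ.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    name: str
    intended_use: str          # what the model is (and is not) for
    development_process: str   # data sources, training notes
    performance: dict          # metric name -> value on stated test sets
    limitations: list = field(default_factory=list)  # known failure modes

    def to_text(self) -> str:
        """Render the card as a short, publishable summary."""
        metrics = ", ".join(f"{k}={v}" for k, v in self.performance.items())
        return "\n".join([
            f"Model: {self.name}",
            f"Intended use: {self.intended_use}",
            f"Development: {self.development_process}",
            f"Performance: {metrics}",
            "Limitations: " + "; ".join(self.limitations),
        ])


# Fictional example: a public-sector triage model.
card = ModelCard(
    name="benefits-triage-v1",
    intended_use="Flag benefit applications for human review; never auto-deny.",
    development_process="Trained on 2019-2023 anonymized case records.",
    performance={"recall": 0.91, "precision": 0.74},
    limitations=["Under-tested on applications filed in minority languages"],
)
print(card.to_text())
```

Even this toy version shows why the practice builds trust: once the card exists, the limitations and intended-use statements become something the public can hold the institution to.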

Finally, global coordination must continue. Work by UNESCO and the Organisation for Economic Co-operation and Development, as well as the G7’s Hiroshima AI Process, offers blueprints, but implementation at national and local levels is patchy. Investment in prosocial AI — AI systems that are tailored, trained, tested and targeted to bring out the best in and for people and the planet — is not an ideal but a necessity.

Curating Agency, Day by Day

Agency is not something we lose all at once. It’s something we give away in small increments. But that also means we can rebuild it one mindful moment at a time. Individuals can take five minutes to reflect: How many decisions did we delegate to AI today? Could we have made one of them differently, more mindfully? Tomorrow, we can try completing one task without AI assistance — just to feel the difference.

| A-Frame | What It Means | Solution | Why It Works |
| --- | --- | --- | --- |
| Awareness | Notice early signs of overreliance on AI. | Log one “manual” decision per day and compare it with AI’s advice. | Cultivates acute attention to the choices we make. |
| Appreciation | Recognize the value of human intuition and ethics. | Before acting on AI advice, do a human “gut check.” | Connects technology with emotional inclination. |
| Acceptance | Embrace AI as a partner, not a crutch. | Alternate between automated and human-led processes. | Connects online and offline, while showing boundaries. |
| Accountability | Keep decision-making responsibility clear and traceable. | Publish or review model cards that explain how decisions are made. | Concentrates on outcomes and human liability as the final line. |

The silent erosion of agency is not inevitable. By adopting and promoting the A-Frame, policy makers, educators, businesses and individuals can actively protect and promote cognitive autonomy amid AI integration. A hybrid world requires hybrid intelligence — harnessing the complementary magic of natural and artificial intelligence. Ensuring we cultivate such minds demands deliberate action. Now.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Cornelia C. Walther is a visiting fellow at the Wharton Neuroscience Initiative/Wharton AI & Analytics Initiative, as well as an adjunct associate faculty at the School of Dental Medicine at the University of Pennsylvania.