AI Use in Higher Education Does Not Have to Mean the Death of the Humanities

The decline in university enrolment in humanities programs has been a topic of discussion for decades.

January 2, 2024
Since ChatGPT’s emergence in late 2022, we’ve seen countless articles about the threat AI poses to education, the author notes. (Photo illustration by Omar Marques/SOPA Images via REUTERS)

The rise of chatbots that can write decent first-year university essays has raised fears that higher education in the humanities is doomed. But there’s another way of looking at this: as an opportunity to refocus on pursuits that remain entirely human — critical and abstract thinking.

Since ChatGPT’s stunning emergence in late 2022, we’ve seen countless articles about the threat artificial intelligence (AI) poses to education. A recent Toronto Life article focused on the potential to cheat on assignments. One Vox piece claimed that “college professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate.”

Stephen Marche’s article in The Atlantic predicted doom for the humanities, blaming the disciplines themselves for what he considers to be their tendency to gaze inward rather than confidently assert the importance of language, history and other aspects of the humanities.

The laments come with a tone of disdain for students and the humanities in general that is unfortunately all too familiar in discussions about the modern university and academic disciplines that have allegedly gone astray.

Indeed, the decline in university enrolment in humanities programs has been a topic of discussion for decades, attributed to greater investment in the STEM fields (science, tech, engineering and math); students pursuing more lucrative, pragmatic degrees; and decreases in government funding. A recent article in The New Yorker ably traces the many other causes of declining humanities enrolments.

The discourse on AI in universities and public fora seems driven by three main issues. First, there’s a belief that large language models (LLMs) are more sophisticated than they actually are and possess something resembling human agency, which they do not. Second, there’s an assumption that students are not engaged with their studies and will take any opportunity to cheat on assignments, a “kids these days” argument that fails to grasp the complex dynamics of the contemporary student experience and the real reasons why students cheat.

Finally, the discourse seems to fundamentally misunderstand or deliberately obfuscate the purpose and benefit of humanities education, which is ultimately to spark and nurture thinking, including the writing process, and not to produce specific high-quality pieces of writing.

These arguments don’t hold water. LLMs are not as powerful as the fears suggest. Rather than the all-powerful cybernetic minds some imagine, existing AI tools are better characterized as “calculators for words” or “stochastic parrots”: statistical models that can ape human discourse and ideas but lack any genuine creativity or true “thought” in what they produce.

Further, a study predating ChatGPT’s release argues that the statistical nature of LLMs leaves them ill-equipped to replace the reasoning and complexity involved in humanities research. Although these systems have advanced rapidly since their initial release, several experts, among them computer scientist, entrepreneur and popular Substack commentator Gary Marcus, continue to argue that we are overestimating what these tools can produce.

Then there’s cheating. Although this is a reasonable fear, there is a way around it: offer assignments that test critical thought and comparison, asking students to apply abstract concepts in specific contexts and to closely analyze a text for linguistic, historical, contextual and narrative meaning. The latter practice, known as “close reading,” has been a cornerstone of literature and humanities education for centuries.

A writing assignment with clear outcomes and a research focus that tests multiple ways of thinking and communicating (such as pairing a presentation with an essay) will produce better outcomes and more engaged students. Such assignments are also difficult to cheat on, even with AI.

Let’s give humanities education the pride of place it deserves. AI can be used to highlight the skills of communication, thought, analysis, creativity and collaborative discussion that humanities education engenders. Joseph Aoun’s 2017 book Robot-Proof advocates an approach that fosters the dynamic and creative thought AI cannot replicate and likely will not be able to in the near future.

The tools for such an approach already exist — they simply need to be supported. For example, the educational movement Writing Across the Curriculum (WAC) promotes writing pedagogy across all disciplines, including STEM, that focuses on integrating writing education into regular teaching. The “writing to learn” philosophy, another example, treats writing as a way of thinking through ideas on the page, something ChatGPT cannot do.

Such programs are already being adopted at several universities: the University of Toronto’s Writing-Integrated Teaching program (inspired in part by WAC), Syracuse University’s WAC program and the University of Minnesota’s Writing-Enriched Curriculum program. With funding, such programs could help “robot-proof” university education while also increasing student engagement and interest. In effect, the value of the humanities need not be diminished by AI. The challenge is in making that clear.

Scholars from some universities and associations, such as the Modern Language Association (MLA), have doubled down on the idea that the humanities still have much to offer. The MLA in particular has published several essays and statements on the subject. Organizations outside these disciplines have also recognized the value of such an education in the age of AI, as a means of enhancing AI development in a more human-oriented way and addressing the related ethical, philosophical and cultural issues.

Humanities education has great value and could even be integral to supporting and enhancing the critical thinking necessary for research in new technologies. ChatGPT and other LLM tools will not kill off the academy, provided professors learn to appreciate and articulate the value of their disciplines to those beyond their own world.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Matthew da Mota is a post-doctoral fellow at CIGI’s Digital Policy Hub, where he researches the uses and governance of artificial intelligence and large language models within universities and public research institutions.