ChatGPT Strikes at the Heart of the Scientific World View

That this AI is adaptive and can produce complex outputs is a technical triumph. But at its heart, it's still just pattern recognition.

January 23, 2023
ChatGPT is an artificial intelligence tool for automatic language processing developed by OpenAI. (Joao Luiz Bulcao/Hans Lucas via REUTERS)

The past year delivered one tech debacle after another, from the cryptocurrency implosion to Elon Musk’s reputational immolation, to Mark Zuckerberg’s weird bet that people would be thrilled by the addition of legs to avatars in his Second Life-like Metaverse. Until late 2022, it seemed this might be the year that Silicon Valley finally “crashed head-first into reality.”

The overwhelming reaction to the late November release of for-profit American company OpenAI’s ChatGPT, however, suggests that Silicon Valley’s hold on the public imagination remains as strong as ever. Journalists, academics and pundits are fascinated by this technology, which provides articulate, grammatically correct responses to natural language questions and prompts.

As a colleague reminded me, the technology is not new. Governments and companies have been throwing billions upon billions of dollars at machine learning for years. Advances such as ChatGPT, as well as OpenAI’s other product, DALL-E 2, a similar program that generates art and realistic images from text descriptions, are noteworthy but not surprising.

The fascination with ChatGPT is rooted in what people believe they can get from these chatbots. Writers, while fearful that they could be replaced, see a promise that ChatGPT can tell them authoritatively how to assemble their ideas. For most people — possibly excepting Stephen King, if Saturday Night Live is to be believed — writing is hard. This article’s first paragraph alone took me several hours to write and refine. Imagine having it delivered instantly, as if by magic.

For readers, the promise is easy access to authoritative knowledge, packaged and legible, at our fingertips. Educators express fears that students may pass off ChatGPT outputs as their own writing. Although many people have remarked how the bot’s convincing-seeming output is often error-filled, there’s a general expectation that, as the technology improves, the number of mistakes will decline.

The Death of Science

But as important as these effects are, the furor over ChatGPT points to something even more significant. In recent years, expertise with data collection and manipulation has all too often, in almost every area of human endeavour, been equated with a deep understanding of that area. Examples include digital contact tracing (health) and cryptocurrencies (finance).

ChatGPT continues this trend. It offers further evidence of the rise of what is, in effect, a post-rational, post-scientific world view: a belief that if you gather enough data and have enough computing power, you can “create” authoritative knowledge. In this world, it’s the technician, not the scientist, who is seen as the most knowledgeable. It’s a world in which intellectual authority rests not with subject matter experts but with those who can create and manipulate digital data. In short, knowledge itself is being redefined.

As inconvenient as it will be for teachers to police against ChatGPT-enabled cheating, the hassles they’re facing illustrate but one tiny piece of the upheaval we can expect: we haven’t even begun to grapple with the full implications of this transformation.

Taking Science for Granted

For centuries, knowledge and science have been considered equivalent by most people, most of the time. As a result, it can be hard to grasp that scientific thinking is only one possible way of seeing the world.

Simplifying enormously, science as a form of knowledge privileges rationality and theory building. Theories are our mental images or ideas about how the world works; they form the context that shapes how we act in the world. Science involves testing and refining these ideas against our social and physical worlds. Its aim is to produce understanding of the world. Most importantly, science is humble. Our theories are always shaped by our limited human perceptions. We cannot hope to overcome these biases and limitations entirely, but by critically examining our theories and methods we can hope to improve our always-limited understanding of the world.

Other Ways of Knowing

But as much as moderns equate knowledge with scientific knowledge, there are other ways of knowing. Prior to the European Enlightenment, for example, religion and the Catholic Church were the ultimate sources of knowledge in Europe. Or consider political knowledge, in which the goal isn’t understanding but results. In its most extreme form, totalitarianism, it’s the leader who serves as the source of legitimate knowledge.

And different forms of knowledge can co-exist. The Enlightenment didn’t eliminate religion in Europe. Theocracies such as Iran today still engage in science, while scientists can be both religious and politically active. What matters is the hierarchy of these different forms of knowledge, which determines which groups we turn to for ultimate guidance.

Correlations and the End of Theory

In contrast with scientific thinking and its emphasis on theory building and context-specific knowledge, ChatGPT and the thinking behind it equate knowledge not with understanding but with correlations. This is the thinking of the technician, not of the scientist.

Knowledge through correlation is the ultimate promise of big data and artificial intelligence: that, given enough data and enough computing power, a computer can identify correlations that will speak for themselves — no theory is needed.

Unlike the scientist’s, the technician’s world view focuses not on understanding but on correlations. Like all machine-learning models, ChatGPT breaks words, sentences, paragraphs and texts into data, and is designed to look for patterns of words and sentences that tend to appear together in certain situations. That it is adaptive and can produce complex outputs is a technical, well-financed triumph. At its heart, though, it’s still just pattern recognition.

In other words, as scholars danah boyd and Kate Crawford pointed out in a foundational 2012 journal article, “Big Data changes the definition of knowledge.”
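
To make the pattern-recognition point concrete, here is a deliberately crude sketch of correlation-driven text generation. To be clear, this is my own toy bigram model, not ChatGPT’s actual architecture, which is vastly larger and more sophisticated; but the underlying move, predicting the next word from patterns of co-occurrence rather than from any model of the world, is the same in kind.

```python
import random
from collections import defaultdict

# Toy corpus: a handful of sentences, treated purely as sequences of tokens.
corpus = (
    "the data speaks for itself . "
    "the data never speaks for itself . "
    "correlation is not understanding . "
    "understanding requires a theory of the world ."
).split()

# "Training": record, for each word, the words observed to follow it.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Produce fluent-seeming text purely from observed co-occurrences."""
    word, output = start, [start]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:  # no observed continuation; stop
            break
        word = random.choice(candidates)  # pick a statistically plausible next word
        output.append(word)
    return " ".join(output)

# Fluent-sounding, grammatical-ish, and entirely indifferent to whether any of it is true.
print(generate("the"))
```

Scale that move up by many orders of magnitude and the output becomes far more convincing. What you do not get, at any scale, is a theory of the world.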

Correlation Is Not Scientific Understanding

But the belief that data can speak for itself is absurd. It’s an ideology that scholar José van Dijck calls “dataism.” As van Dijck and boyd and Crawford argue, data is never independent of people: everything about data — from the choice of what should count as data, to its collection, to its storage, to its use — is shaped by our limited perceptions, understandings and abilities, and the contexts in which the data is collected and used.

The natural and insurmountable limitations of (human-produced) data mean that computers can only ever give us the illusion of understanding, at least in the scientific sense. The Turing test, after all, involves programming a computer that can fool people into thinking it is sentient — it doesn’t determine the presence of actual sentience.

ChatGPT itself highlights the intellectual emptiness of the correlation-as-knowledge world view. Many people have remarked that the tool produces outputs that read as plausible, but that subject matter experts tell us are often “bullshittery.” Engineers will almost certainly design more-convincing chatbots. But the fundamental problem of evaluating accuracy will remain. The data will never be able to speak for itself.

This is the paradox at the heart of the correlations-based faith in big data. In the scientific world view, the legitimacy of a piece of knowledge is determined by whether the scientist followed an agreed method to arrive at a conclusion and advance a theory: to create knowledge. Machine-learning processes, in contrast, are so complex that their innards are often a mystery even to the people running them.

As a result, if you can’t evaluate the process for accuracy, your only choice is to evaluate the output. But to do that, you need a theory of the world: knowledge beyond correlations. The danger of a dataist mindset is that a theory of the world will be imposed, unthinkingly, on the algorithm, as if it were natural rather than someone’s choice. And wherever they come from, whatever they are, these theories will shape what the program considers to be legitimate knowledge, making choices to prioritize some information over others.

Consider what is likely ChatGPT’s most lauded accomplishment: that its responses, unlike those of other chatbots, don’t go “full nazi” within 10 minutes. This has been a serious problem with previous chatbots. The most infamous of these is probably Tay, an ill-fated Twitter bot from Microsoft — itself an OpenAI investor. Within 24 hours of its 2016 release, users had tweeted multiple references to Nazism and other hateful ideologies at Tay, which it, by design, duly began to repeat, while also serving up unprompted weirdness such as “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” in response to the question “is Ricky Gervais an atheist?”

Ensuring that ChatGPT didn’t just reflect the often incredibly vile nastiness that passes for online discourse was almost certainly an express design goal of the OpenAI team. They couldn’t trust their correlations, and not just because doing so would have made them look completely ridiculous. OpenAI is an organization with major money behind it, and ChatGPT is almost certainly not cheap to run. OpenAI was started with an initial investment of $1 billion from investors including Elon Musk and noted Trump supporter Peter Thiel. In 2019, Microsoft invested $1 billion in OpenAI, and it is currently in talks to invest another $10 billion on the strength of the ChatGPT hype. OpenAI knew that if its tool had gone “full nazi” after launch, these multi-billion-dollar investments would have been dead in the water.

The point isn’t that OpenAI got the ideological balance wrong or right in its (not fully successful) efforts to tamp down racist and hateful ideologies. It’s that the designers had an idea about what the output should be — one informed by their own preconceived notions, business interests and ideologies. Clearly, they tweaked their system until it produced the output they wanted to see. Its output depends entirely on the choices of its “trainers” — those who decide what data and data sets are important, and design algorithms to follow the rules of correlation that they themselves decide on. Whoever controls the chatbot’s design will shape what it produces.

The Hierarchy of Knowledge

Science doesn’t disappear in a dataist world. What matters is the hierarchy: which groups are driving the discussion and, most importantly, seen as possessing the most important knowledge. In a theocracy or a totalitarian state, science is subordinated to religious or political knowledge.

The same goes for science and technique. Digital technology made the globalized financial system of the 1980s possible, but it was the financial sector that ran the show. The information technology technicians were consigned to the back of the shop, as it were. The scientist, or the subject matter expert, proposed, and the engineer disposed.

Now these roles have been reversed. We have Apple-branded payments systems. We have PayPal and Venmo, which are seen as tech companies first, financial companies second. And of course, we have cryptocurrencies, the ultimate expression of tech-driven hubris. In the world of the technician, which is where we increasingly live, it makes perfect sense that a tech company would presume to know enough about finance to replace the global financial system.

The Importance of Belief

Dataism’s definition of knowledge is fundamentally unscientific. But just as it doesn’t matter, for how people behave, whether or not heaven is real (what matters is whether they believe it is), the important thing about machine learning isn’t whether the data can speak for itself, but whether we act as if it can. Our assumptions and beliefs about the world dictate how we act in the world.

Much of the data governance debate has focused on how companies with access to troves of our personal data could use this data — as Shoshana Zuboff argues in her influential polemic, The Age of Surveillance Capitalism — to brainwash us and modify our behaviour. But the real issue isn’t that companies such as Google and Facebook now have the tools needed to cause, as Zuboff describes it, a “seventh extinction,” the death of the human spirit itself. Rather, it’s that governments, companies and individuals will buy into the ideology of dataism, treat the results of pattern-recognition programs as infallible, and regulate accordingly.

It bears repeating: the issue is not what these machines can do, but what we believe they can do, and how this changes how we act. The problem is primarily ideological, not technological. Driven by a dataist faith in big data, governments, and society generally, are automating and outsourcing countless important activities to the individuals and organizations with the power to command and manipulate the data we’ve decided is necessary to run our lives. We do this not because machine learning is capable on its own of generating unique insights, but because we believe it can.

Automation without Understanding

Different forms of knowledge put different groups in places of power when it comes to determining what knowledge gets created and used, and for what ends. Different types of knowledge workers — be they priests, scientists or tech billionaires — will define and use knowledge differently.

In a big data world, power over knowledge lies with the individuals and organizations that can marshal the resources to collect and deploy data and computing power, and to create the algorithms needed to make machine learning work.

Their authority comes from the dataist belief that data, and the process of its collection, is neutral, and that the machines they create will produce authoritative and useful knowledge.

The OpenAI approach to creating knowledge reflects a dataist view of knowledge. It betrays a technician’s mindset: automation without understanding. Automation always involves breaking a process down into its component parts, routinizing the parts that can be turned into data and ditching the parts that can’t.

Sometimes, automation produces an acceptable result. Other times, it can transform the nature of the activity entirely.

During the early stages of the COVID-19 pandemic, for example, tech companies were quick to insert themselves into the public health system by promising digital contact tracing, using location tracking of people’s smartphones as a substitute for people reporting their personal contacts with infected individuals. However, as political philosopher Tamar Sharon recognized, this automation stripped long-established manual contact-tracing processes of the aspects that actually make contact tracing useful, such as knowing whether there was a wall between two people registered as being in close proximity. It’s no surprise that, from a public health perspective, digital contact tracing amounted to very little.
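
To see what that stripping-away looks like in practice, consider a hypothetical sketch of the kind of rule an exposure-notification app boils “contact” down to. The class, function and thresholds here are my own illustrative assumptions, not the parameters of any real system.

```python
from dataclasses import dataclass

@dataclass
class ProximityEvent:
    """One phone's record of being near another phone."""
    other_device_id: str
    estimated_distance_m: float  # inferred, e.g., from Bluetooth signal strength
    duration_min: float

def is_exposure(event: ProximityEvent) -> bool:
    # Hypothetical rule: "close enough" for "long enough" equals exposure.
    # Everything a human tracer would ask about (masks, ventilation, whether
    # a wall separated the two people) simply has no column in this data.
    return event.estimated_distance_m <= 2.0 and event.duration_min >= 15.0

# Flags an "exposure" even if the two people were in adjacent apartments.
print(is_exposure(ProximityEvent("abc123", estimated_distance_m=1.5, duration_min=20.0)))
```

A human contact tracer can ask about context; the app cannot, because the context was never collected.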

Automation without understanding is also on display with ChatGPT and the student essay. As every teacher will tell you, student essays are, almost without exception, boring and repetitive. Countless op-eds have highlighted how ChatGPT can replicate a rote high-school essay. From one angle, it seems to have automated the student essay.

In practice, however, it has only automated the least important part of it. The essay is a centuries-old, proven technology for teaching people not only facts but also how to think. Ignore for a moment that ChatGPT is a giant auto-complete machine that produces bullshit: text without understanding. ChatGPT automates only the output aspect of the essay. As a technology, it — and OpenAI by extension — ignores that the student essay’s main purpose is not to present information but to teach a student how to think by working through the steps of producing an essay. We write bad essays today so that we can write good essays tomorrow.

By potentially destroying the essay as a pedagogical tool, OpenAI has taken direct aim at the very foundations of our science-based educational system, all in the name of disruption, of creating a true artificial intelligence designed to shape and create a new form of knowledge.

At Stake: The Power to Control Knowledge Itself

A chatbot or search engine’s clean interface can make it seem like its output appears out of thin air, delivered by a neutral machine. But algorithms, computer programs and machine-learning processes are explicitly designed by people to do some things, and not others. The power to design knowledge-creating machines is a form of ultimate power, to control what counts as knowledge itself.

This power is even more awesome when you consider that we, the great unwashed, can only evaluate the output, not the steps that led to it. Unlike a book, which provides information about the publisher, the author and the author’s sources that you can review to determine its trustworthiness, ChatGPT is an oracle — moreover, one that can be manipulated to produce what its creators consider to be the “correct” outcomes.

As academics Mary L. Gray and Siddharth Suri remind us, so-called artificial intelligence systems always depend on behind-the-scenes workers making decisions about how content and data are evaluated. These choices, by definition, favour some groups and outcomes over others. There will always be a thumb on the scale.

That’s from the inside. On the outside, absent scientific verification, dependence on the oracle reduces the rest of us to hapless recipients of automated wisdom who simply must trust that the oracle is correct — which it is, in a post-science world, because it’s the oracle. It’s a form of knowledge that demands awe and acceptance, not understanding. In effect, it degrades knowledge into a form of magic. It removes from individuals the power to understand, question and challenge. It’s infantilizing.

Dataism, Not Machine Learning, Is the Real Threat

But machine learning itself isn’t the problem. I’m writing this using Microsoft Word. While its spell- and grammar-checkers aren’t perfect, they’re still useful. The same is true for technology in general. Tesla’s supposed full self-driving mode may, according to the safety advocacy organization the Dawn Project, have an unfortunate tendency to lead to collisions, but driver-assistance technology seems like it can be helpful.

By contrast, OpenAI’s technology, and machine-learning tech generally, is made possible only by appropriating the work of billions of people — artists, authors, regular people — turning that work into data and using it, without the creators’ express informed consent, to construct the model. This tool, if used as intended, could deprive artists and educators of the ability to earn a livelihood or do their jobs.

ChatGPT’s designers could have aimed to create a tool with a regard for verified truth, or at least measured their progress against that standard. But such a goal, which would have required privileging scientific, subject-expert judgment, goes against the hard core of dataism, where truth is determined by mere correlation.

While the scientific method empowers, oracular tools like ChatGPT create two problems. First, they make it that much more difficult for non-experts to evaluate claims and reason for themselves. (Some people have suggested that teachers could assign students to evaluate ChatGPT output for accuracy. That might work for a time, at the cost of turning students into fact-checkers instead of training them to produce knowledge.) More importantly, that suggestion presumes a world in which the scientific method still dominates. But when a society is in thrall to dataism, that can no longer be presumed. To whom does one turn if there’s no way to differentiate scientific texts from nonsense?

Second, trusting in correlations merely subsumes background ideologies, preferences and beliefs into the data and algorithmic designs. There’s a reason why system after system based on machine learning has been revealed to produce racist and sexist outputs: when you depend on correlations to produce knowledge, you end up with conventional wisdoms and popular (and sometimes unsavoury) opinions, not accuracy. But if we place our trust above all in the correlations, on what grounds can we say that the machine is being “improperly” racist?

Doing Better Than Good Enough

The bottom line: Like so many Silicon Valley pitches, ChatGPT promises more than it can deliver. It’s the proverbial stopped clock that’s right twice a day. It promises understanding; it delivers authoritative-sounding nonsense that must still be evaluated by actual experts before it can be trusted. It eliminates the rote steps (reading actual research, writing bad essays) that we know create scientific knowledge and teach people how to think. It pretends that a technology designed to further the business interests of its backers has been created in the public interest. It unleashes an untested technology on an unwitting public as a form of market research. Imagine the reaction if a drug company did the same thing with an untested drug.

ChatGPT isn’t automating the writing or research process. It’s creating a completely new form of knowledge, one in which correlations confer legitimacy, and in which the evaluation of the truthfulness of these correlations occurs behind the scenes, embedded in programming decisions and hidden labour. It is an approach that places scientific understanding in a secondary and, at best, evaluative role.

The issues raised by ChatGPT are about more than a single technology. Meta’s and Tesla’s share values may be cratering, but the race to master machine learning and deploy related technologies in government and industry highlights how ingrained dataism has become. As José van Dijck remarks in her 2014 article, businesses, governments and scholars are all deeply invested in the idea that digital data sources provide us with an objective and neutral, even “revolutionary,” means by which to better understand society, make profits and conduct the business of the state.

These entrenched interests present significant obstacles to ensuring that machine learning is developed in the interest of the public. However, understanding dataism as an ideology points us toward a few habits that can place machine learning in the service of people, and not the other way around.

First, because of the complex, opaque nature of machine-learning processes, all such processes must involve people as the direct, accountable decision makers, and both the decision makers and the affected individuals must be able to explain and understand any decision made “by” automated processes. Machine learning should complement, never replace, direct human agency.

Second, the data rights discussion needs to move beyond a focus on personally identifiable data when it comes to creating large data sets. We need to take seriously the rights and interests of the artists, writers and regular individuals whose expressions and work form the underpinning of these large language models, and whose lives will be directly affected by them. A focus on individual rights is wholly inadequate as a starting point for this conversation.

Finally, we must prevent companies like OpenAI from treating the general public as guinea pigs for what are effectively marketing exercises. The flurry of ChatGPT op-eds shows how even experts are struggling to understand the technology’s implications. The only consensus so far is that it will upend any number of areas. It’s high time such companies received a degree of regulatory attention befitting their importance.

Such proposals, and well-meaning efforts like the global agreement on the ethics of artificial intelligence from the United Nations Educational, Scientific and Cultural Organization, will certainly face resistance. Governments and companies have put a lot of money and time into machine learning in the name of efficiency and economic competition. A data-driven society is based on the belief that if you don’t maximize data collection, you’re leaving money on the table.

Dataism is an ideology, a world view that shapes how people see and understand everything. It’s a world view embraced by the engineer who confuses correlation with scientific truth, and by the bureaucrat who applies the model with the belief that it reflects reality, that it is good enough.

World views, like ingrained habits, are not easily cast aside. For centuries, we’ve embraced science as an ideology that equates knowledge formation with rational thinking, foregrounding our theories, insisting on rigorous and transparent processes for creating and validating knowledge.

But that world view is also a habit. And habits can be maintained, upheld and strengthened, or they can be broken. Whether we — as citizens, educators, politicians and businesspeople — have the will to maintain a commitment to science, to placing technology in the service of understanding, is the question at the heart of the debate over artificial intelligence. ChatGPT can’t answer it, but our reaction to ChatGPT will.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Blayne Haggart is a CIGI senior fellow and associate professor of political science at Brock University in St. Catharines, Canada. His latest book, with Natasha Tusikov, is The New Knowledge: Information, Data and the Remaking of Global Power.