Kate Crawford On the Toll AI Is Taking on Humans and the Planet

Season 3 Episode 14

AI development has been met with questions about its many impacts on our world. Maybe we should consider whether it’s worth developing at all.

S3E14 / May 27, 2021


Episode Description

Artificial intelligence (AI) is hailed as a great technological leap forward, one that will enable immense efficiencies and better decision making across countless sectors. But AI has also been met with serious questions about who is using it and how, about the biases baked in and the ethics surrounding new applications. And an ever-growing concern about AI is the environmental toll it takes on our planet. Do the benefits of AI innovations outweigh all these concerns? Is it even worth it to develop AI at all?

In this episode of Big Tech, Taylor Owen speaks with Kate Crawford, research professor of communication and science and technology studies at USC Annenberg, a senior principal researcher at Microsoft Research Lab – NYC and author of the new book Atlas of AI. Crawford studies the social and political implications of AI.

Crawford’s work gets to the core of AI, looking at it as an extractive industry. AI extracts data, but it also extracts labour, time and untold quantities of natural resources to build and power the vast banks of computers that keep it running. Crawford argues that much of the work in AI is not, in fact, built in some “algorithmic nowhere space” on pure data, objective and neutral, but is instead grounded in the ghost work and human labour that train these systems. The industry mythologizes the internal workings, “our deeply abstract mathematical and immaterial understanding of AI,” as a way to avoid scrutiny and oversight. As Crawford explains, rather than trying to govern AI as a whole, we need a broader approach that addresses the many extractive aspects of AI and its wider planetary costs.

Transcript

This transcript was completed with the aid of computer voice recognition software. If you notice an error in this transcript, please let us know by contacting us here.

Kate Crawford: So I think we do need to start looking at these broader questions around how power is centralized and mobilized by these systems. Who benefits from these systems? And who's harmed? And when we kind of ask those more core questions, AI just becomes part of a much bigger set of questions that we have to ask, really around how society is going to be constructed.

Taylor Owen: Hi, I'm Taylor Owen. And this is Big Tech.

Taylor Owen: I've been wanting to do an episode on AI for a long time now, and I'm really glad this is the one we're doing. Of course, artificial intelligence runs through a lot of the conversations I have on this podcast. In the policy world in particular, AI has become one of the primary ways of thinking about everything, from geopolitics to the future of humanity. And in recent years, much of the debate around AI has focused on bias. We now know that there are deep structural flaws in many of our AI systems: an Amazon hiring AI rejects female applicants, automated parole systems disproportionately sentence black prisoners, and facial recognition has led to the false arrests of people of colour. All of this has led to a tremendously important debate in the field. How do we remove bias from these systems? And how do we develop AI responsibly and ethically? But what if we framed that debate in completely the wrong way? What if the question isn't how we develop ethical AI, but whether we need to totally rethink our understanding of AI? And maybe we shouldn't be developing it at all? That's the premise of Kate Crawford's new book, Atlas of AI. Kate's been immersed in this field for nearly 20 years. She's a principal researcher at Microsoft Research, and the co-founder of the AI Now Institute at NYU. She's also an artist and was part of an electronic music duo that was nominated for an ARIA Award, sort of like the Australian Grammys. Kate argues that at its core, artificial intelligence is rooted in extraction: it has to exploit the planet, people, and the data they produce in order to function, which also means that AI is deeply intertwined with power. It is fundamentally designed to serve the needs of the people who are designing it. Sometimes a book or an idea completely reshapes how we think about a topic. This is one of those books. Kate's reframing pushes the discourse around AI away from the intangible and, in many ways, unknowable world of algorithms and into the tangible world of resources, labour, and power: things we are familiar with. And in shifting our focus to these material elements, Kate has provided a map not just for understanding AI, but also for governing it. Here is Kate Crawford.

Taylor Owen: I mean, God, there's a million things I'd like to talk about with this book. But I actually wanted to start with where you end the book, which is describing that Jeff Bezos promotional video lecture about Blue Origin.

[CLIP]

Jeff Bezos: What happens when unlimited demand meets finite resources? The answer is incredibly simple, rationing. That's a bad path.

Source: Blue Origin YouTube Channel https://youtu.be/GQ98hGUe6FM

“Going to Space to Benefit Earth (Full Event Replay)”

May 10, 2019

Taylor Owen: The line that you pull out of it kind of took my breath away at the time, which is that he says, "We'd have to stop growing, which I think is very bad for the future."

[CLIP]

Jeff Bezos: The good news is that if we move out into the solar system, for all practical purposes, we have unlimited resources.

“Going to Space to Benefit Earth (Full Event Replay)”

May 10, 2019

Taylor Owen: So I'd love to hear your thoughts on that video in general, and that talk and his framing. But also how his perception of the planet, how his ambitions, how his power -- how you think of all those things in the context of AI, not just in the context of him wanting to build an interplanetary species or whatever it might be?

Kate Crawford: I love this question, Taylor. And I also love starting at the end. For me, it's about how we see the trajectory of the underlying ideology of big tech. It's like, where does all this money go when billions of dollars have been created by this handful of men? How do they spend it? They create a privatized space race to leave the planet. It's an extraordinary abrogation of responsibility and a desire to commit to growth above all things. But it's so extraordinary, when I found that video for Blue Origin, a sort of promotional film, I mean, it really is the stuff of Leni Riefenstahl. I mean, it's so beautifully put together. You've got the images of the Saturn V taking off, you've got these inspirational images, you've got mountain climbers, you've got divers, you've got explorers descending into canyons. And then you have Jeff Bezos himself, just saying that this is literally the most important work that he's doing, that Blue Origin space mission is what it is about, in order, precisely as you say, to maintain growth. Because without growth, we have no future.

Taylor Owen: What's the point of life without growth?

Kate Crawford: Right. Right. And it's such an extraordinary statement because, of course, it takes us back through a fascinating intellectual history, which I trace back in the book and sort of go back to that classic landmark report by the Club of Rome in 1972 called The Limits to Growth. And they create these predictive models about the end of non-renewable resources and the impacts of things like population growth and sustainability. And it's so extraordinary, because, no matter what, they kept saying, "We kept trying different models, but nonetheless everything collapses around the year 2100; there simply aren't enough resources to contend with the population demands." And it really is this clarion call to sustainable management and reusing resources. And of course, Jeff Bezos, among many others, found this to be really terrifying. It's like, we can't go to a no-growth model, we have to find a new frontier for growth. And of course, the new frontier became space. And so you start to see a range of Bezos initiatives, but many other tech billionaires as well, looking at space mining, looking at space colonization. And what's interesting about this, to me, is not just the kind of colonizing metaphors and frontier mining as corporate fantasies, but it's also this fundamentally troubling relationship to the earth. It completely displaces that idea of sustainability, reducing growth, building forms of mutual aid; it's actually much more about continuing as we are, just extending this industry of extraordinary extraction as far as it can go. And that history, to me, is one that really points to how we got here. It sort of tells us a lot about the AI industry as well, which is also premised on forms of extraction; extraction of natural resources, of labour, of data. And you sort of see that in the space ideology too. You see it's kind of the compressed, if you will, crystalline version of that ideology, but wrapped up in the fantasies of outer space exploration.

Taylor Owen: Yeah. And I want to talk about the various extractive elements of AI, but I wonder if you could first lay out how AI has traditionally been defined?

Kate Crawford: I mean, you can look to the many books that have been written about AI and the many papers. Almost all define AI purely as a set of technical approaches. This focus on algorithmic techniques has been very dominant and also, I think, a parallel focus on the great men of AI, the people who sort of pushed the technical boundaries a little further each time. And I think this is the kind of deeply abstract mathematical and immaterial understanding of AI that's become dominant at the moment, this idea that AI is in the cloud, literally and metaphorically. It is not of the earth, it has no material footprint. And it becomes purely an abstract algorithmic nowhere. I think it's a serious political problem, as well as, I think, a sort of analytic problem. And part of the reason I wanted to write this book was to, if you will, bring AI back to earth and to really ground it as a material technology and to look at that material political economy that drives it. What are the industrial formulations behind it? So, for me, that was really this idea of taking it away from algorithmic nowhere space and looking at the specific places where AI is made, produced, and where people and institutions are making choices.

Taylor Owen: Yeah. It feels like when it stays in that intangible ephemeral space almost, or theoretical space, it allows it to be anything to anybody. Which allows it to be imagined in all these ways to either aggrandize its creators or to empower the people who own it. But something about making it tangible takes away that imagined ability of it, doesn't it?

Kate Crawford: I think that's exactly right. And I think that intangible enchanted -- we use this term, Alex Campolo and I, in an article where we use this idea of enchanted determinism.

Taylor Owen: Yeah, I love that.

Kate Crawford: That these systems are sort of seen as both magical and alien and otherworldly, and yet at the same time deterministic and can be used with predictive certainty to tell us how decisions should be made. That is a political choice. And that has political ramifications. It means that we don't look at these wider planetary costs. It means we don't assess the many forms of human labor that are used to prop up these systems to make them appear intelligent. And it means that we don't look at the different alternative futures that are possible, the other ways of actually ordering information and engaging with each other, and all of the things that we turn to AI to do.

Taylor Owen: Yeah. The humanistic side of your work comes through so clearly in this book, and I'm wondering to what degree that shapes your thinking about this? Not thinking about this just as a scientist or even just as a social scientist, but as an artist and a musician, has that shaped how you viewed this whole discourse that you've been so immersed in for 20 years?

Kate Crawford: Absolutely. If you go back to the earliest years of AI, in the 1950s and 1960s, it was very much an inter-discipline. You had linguists sitting around the table with computer scientists. You had artists collaborating with people working on speech recognition. It was a much more diverse field, both in terms of gender and disciplinary orientation. And we've seen that narrow. And particularly -- this is, again, a history of capitalism -- but when a field becomes very powerful and there's a lot of money involved, it tends to really narrow who is seen as an expert and who has a voice in that space. And obviously, you can see how that happened in Wall Street, and it's certainly happened in machine learning as it's become such a high-demand area where people are paid basketballer salaries to work on these systems. Suddenly it became just the space where we can only talk about these systems technically. And while I think the technical component is an important part here to analyze and to critically grapple with, it has prevented us from looking with these more humanistic lenses, with social scientific lenses, with the many different approaches that we should be using to understand these systems. And really, that became my methodology: if my point is that we have to move away from this abstract immaterial understanding of AI, then I have to go to the places where it's being made. I have to physically go there and understand it and actually put myself in that space, rather than pretending we can look at these things at arm's length.

Taylor Owen: Yeah. So let's talk a bit about that. I mean, your broader framing sees AI as an extractive technology ultimately, and one that's deeply material. And one of the core ways it is, is by literally extracting and using tremendous amounts of resources. What's the impact of AI on our natural world, first?

Kate Crawford: Well, it's profound. And this, to me, was one of the most transformative experiences as a researcher in this space: really starting to study and understand and visit the places of mineralogical extraction, but also the places of smelting and production and shipping that sort of sit behind the conveniences that we experience with planetary-scale computation. And that was kind of extraordinary to me, how these stories are not told, traditionally. We don't think about rare earth minerals and lithium when we think about AI. But of course, these are core to how these systems work. And so, one of the things I did was travel out to the Clayton Valley, which is out in Nevada, to visit the last lithium mine in the United States, which is in a place called Silver Peak. And of course that same area was used for gold and silver mining in the 1800s, and had populations move in and strip the lands of gold and silver and then create these ghost towns. And so, to be in a place like a lithium mine, which is essential for the production of rechargeable batteries throughout the systems that we use, like iPhones or a Tesla car -- this wide range of systems that rely on lithium -- and to realize that lithium too has reached this point where we're at a crisis: we don't know how indeed we're going to keep maintaining these sorts of critical minerals that are essential to how AI works today. And in fact, a very recent report that I just read, which came out last month, looks at this: if we find better ways of dealing with lithium, for example, if we recycle it, if we use it wisely, we might be able to extend it out to 2100. If we don't, it could be as soon as 2040 that we just hit an absolute cliff in terms of what we can do. And so many products have lithium at their core, and that makes us ask really different questions. And it's interesting, because these computational networks are participating in geological and climatological processes; they're actually transforming the earth's materials into infrastructures. And if we think about that from the perspective of deep time, we're extracting elements that required billions of years to form inside the earth in order to serve a split second of contemporary technological time in an Amazon Echo or an iPhone that ultimately gets discarded in less than five years. So that kind of obsolescence cycle is fueling this economy of more and more devices being produced and thrown away, and I think it makes us forget the deep costs of what is actually being built by these systems for these tiny moments of convenience.

Taylor Owen: So on the storage and device capacity, certainly there are resource inputs. But why is AI, as a technology, or a process, so energy intensive?

Kate Crawford: So there are many different ways in which AI has become extremely energy intensive. And it's really happened -- depending on how you count -- in the last 15 years, sort of with the explosion of machine learning. And Emma Strubell at the University of Massachusetts, Amherst, wrote an important paper back in 2019, where she looked at how much energy it takes to train a single NLP model. So that's a natural language model; things that we use to detect the meaning of a phrase of text or to do a translation, for example. And it was such an important paper because it made us sort of contend with just what it takes to train a single model, which she tracked to being around -- and again, this is a rough estimate -- around 660,000 pounds of carbon dioxide emissions. But that's the equivalent of 125 round trips between New York and Beijing.

Taylor Owen: Wow.

Kate Crawford: And what's worse is that she notes that this is a baseline; it's an optimistic estimate. That's what you can do in the academy; it doesn't reflect the true commercial scale of what companies like Apple and Amazon are doing every day. And it certainly doesn't account for what's just happening right now, which is the shift to things like large language models. I mean, we're calling this the era of AI supercomputers, so we're actually making things that are more and more energy and compute intensive, at the same time as the planet is under extraordinary strain. So in so many ways, I think the data economy is premised on maintaining this kind of environmental ignorance.

Taylor Owen: Yeah. And there's no accounting of it, which is what's so remarkable. I mean, we get these little glimpses into these calculations of estimated energy consumption of certain processes, but nothing in terms of an accounting of the energy use of these large companies, for example.

Kate Crawford: And glimpses is the right word. I mean, to really try to get a sense of what, say, Alibaba, or Tencent, or Amazon is really using is extremely difficult; these are guarded corporate secrets. So that question of how, as a researcher, you even find this -- I mean, it's extremely hard. And it shouldn't be. We should know, very easily, these kinds of resource questions. Because I think, without it, we cannot make a good calculus of what should be built and why and whether it's worth it, because we're really not contending with the true cost of AI at all.

Taylor Owen: So another extractive element of AI, or an input into the AI infrastructure, is labour. You also describe what you call Potemkin AI, which kind of speaks to a number of different scholars who have talked about the labour that goes into ... in a hidden way ... into our supposedly automated systems. But I'm wondering how you see that, how you see both the hidden labour and the, in many ways, unhidden labour too; just the huge amount of human activity that goes into this mirage of automation?

Kate Crawford: I mean, Jathan Sadowski uses this term, Potemkin AI. And Astra Taylor ... who I think is fantastic ... uses this term fauxtomation, which is another great one.

Taylor Owen: Oh yeah. I love that. Yeah. And Ghost Work of course too.

Kate Crawford: And Ghost Work by Mary Gray and Sid Suri is another one. And I think all of these scholars are pointing to this profound occlusion of labour, in terms of how these systems work. And it means that you have many, many humans often being paid extremely low wages -- often it's the equivalent of digital piecework, it's a few pennies on the hour -- to really be sort of clicking and pointing and moderating and choosing and selecting and classifying the data systems that support how machine learning works. So across the entire ecosystem of AI, there are people in the background who are effectively making these systems appear intelligent. And this is part of why I say that AI is neither artificial nor intelligent. It is clearly highly material, it's made from energy and rocks and all of these extremely earthly components, but it's also made of people. It's made of all of these forms of human labour that we don't see and that we don't pay very well, and that, in many cases, are doing work that's physically and psychologically very taxing. And one of the places I went for the book, the site where we can see humans engaging with algorithmic systems and robotics, is, of course, the Amazon fulfillment warehouse. And, for me, going inside one of these was a really important moment, because it means that it's no longer abstract; you actually have to see the experiences of the people who are putting objects into the boxes that we may happily order off the internet and not think about how we get them. And it is a physically, extremely stressful job. And you can see it. You see the bandages, you see the statements up on the voice of the worker boards inside the enormous factory space, where people are saying, "This is not sustainable. I need a break. I'm concerned about the rate, about meeting the picking rate." The psychological stresses are profound. And, of course, I'm sure you saw this announcement from Jeff Bezos at Amazon just a month ago, that they're now going to be introducing a new system that, in addition to tracking your work per minute, is going to be tracking the gestures and movements of your ligaments and muscles -- that sort of level of profoundly granular surveillance -- to try and reduce the physical stress; so, of course, their response is to massively increase the surveillant gaze. So, for me, these places allow us to look at, I think, these deeper logics of work and what contemporary work could look like if this shift is allowed towards extreme surveillance, extreme micromanagement. That, while it takes us back to Fordist and Taylorist principles, is profoundly amplified and ramped up by these new systems that allow you to really track people in just horrific ways. And of course, the pandemic has been a big driver to see those logics deepen.

Taylor Owen: It feels like that AI-generated efficiency is not just a component of the industrial capacity of companies like Amazon; it's their core value proposition. And that feels different, in a way, than some of these previous industrial moments. You mentioned an Amazon rep, during a labour negotiation, saying that, "We can talk about all sorts of other issues, but the rate, the efficiency, is our business model. We can't change that." They are saying there that, "Everything else we can talk about, but not the way AI is going to be deployed to make our workers more efficient. That is our business."

Kate Crawford: That is the business. And what's really terrifying is that that's a business that a lot of other companies are looking at and saying, "How do we emulate that? How do we catch that and then apply it to any other type of workplace?" Because it is simply this modular component, this way of thinking about time and algorithmic control that could be applied to anything. And it is the business model. To me, it was one of the most sort of chilling moments of looking at how that corporate ideology can be made completely explicit: you can just say it and people will be like, "Right, that's the model. Now, how do we apply it?" Rather than seeing it as something which is, in itself, so dehumanizing, so corrosive to the experience of dignity at work. That, to me, was, itself, part of this moment of thinking about: how does that model start to get applied more widely? And how would you stop it? Because it itself, I think, threatens ideas of time sovereignty and agency and a sense of pride in what we do at work every day.

Taylor Owen: Yeah. Of course, the other resource that this technology uses is data. And I feel like that's the one that publics are most familiar with. I mean, everybody knows that AI needs huge amounts of data. But a couple of things really struck me about the way you talk about these data. One is, on the one hand, how limited some of those original datasets actually were that led to AI as we now know it. I mean, these are not massive Facebook datasets of 2.2 billion users' daily input; these were very constructed, almost limited datasets, which I found fascinating. But the other is that it seemed like you put more weight on the way those data are classified as a problem, not just the data themselves? And a lot of the bias and data conversation, at the moment, seems to focus on the data themselves being biased. But you take a sort of a broader view of that.

Kate Crawford: Right. I mean, certainly, the problem of bias in artificial intelligence has become very well-known, people can cite a litany of examples of systems that have produced discriminatory results for women, for communities of colour, for people over the age of 65; it's endless. But in researching these questions for well over a decade, specifically looking at issues around bias and discrimination, it became clear to me that that's the megafauna of this ecology; you can see it, you can describe it, it's clearly problematic. And the response of the tech industry has been, "Oh, we can fix that. Oh, we can remove that problematic classification." Or, "We can balance our datasets so that we don't have systems that only recognize white male faces."

Taylor Owen: "We can make better data." Whatever. Yeah.

Kate Crawford: "We can gather more data." More importantly, "We can get more data and that will address the problem. More. More is better." And in fact, even that phrase "more data is better data" you can trace all the way back to people like Robert Mercer when he was working at IBM 45, 50 years ago, so that ideology runs deep. But it doesn't address the problem. And once you get past the megafauna and it's the quite clearly spectacular failures, you get to this deeper issue of: what are the logics on which these systems are built? What are the worldviews that they been trained to normalize and perpetuate? And to see that actually requires not just looking at the egregious instances of failure and bias, but to look at: what does normalcy look like? What do these systems actually do when they say they're working? And here is something which I think is far more disturbing, which is that the very logics upon which these systems are trained contained, within them, ideas that are clearly either illogical, nonsensical or objectionable. So you could think here of binary gender as being a way that so many of their systems categorize people. Some use five categories for race. I mean things that sort of hearken back to the apartheid system-

Taylor Owen: Including other being the fifth, right?

Kate Crawford: Well, right. There's always a category for other, or not identified; which is always kind of extraordinary. This is a methodology that I developed working alongside Trevor Paglen, when we did a project called Excavating AI. And we used that term, excavating, very consciously; it is like an archeological method, where we would start to look at: what are the ways in which data are labeled and classified? How are images and text said to have a singular meaning? And what sorts of meaning are being constellated by these technical systems? And what we found, time and time again, was a profoundly normative, profoundly racist, sexist and ableist vision of the "average person", but also this idea that you can really categorize people in these kinds of very narrow ways, perpetuated throughout so many of the systems that touch our lives every day. And when you see that, the focus shifts away from "Oh, where did they fail us?" to "Why are they working this way and what are the political impacts of that?" Because this is politics by other means. And that's the piece that I think is so often left out, is that AI systems are still somehow seen as objective and neutral calculating machines, when in actual fact it's politics all the way down.

Taylor Owen: Yeah. It's politics. Those politics intersect with our political systems and our governance systems as well in some complicated ways, as you describe. And there really is this intertwining of corporate and state interest in AI. I wonder if you could describe, as a way into that element of this, this alignment between state and corporate interests, what the IBM terrorist credit score is?

Kate Crawford: Right. Right. Well, it's a really interesting example because, of course, it was a simulation that a group within IBM came up with, almost as a sort of a marketing pitch. It was, for them, a presentation that they could make which said, "Look, what we see is a real movement of refugees across Europe. What if terrorists are traveling along with these groups, pretending to be refugees? How would we start to distinguish between a real refugee and a potential terrorist? And how would we do that with large-scale data?" And so they developed this terrorist credit score. And it's drawn on unstructured data, like Twitter data, but also very standardized passport-type data. But of course, all of this was simply to show that you could use refugees as test cases for these types of military and policing logics. And it's just one of those moments where you get to see the way in which, I think, there's been a conflation of state and tech sector power. These sectors that we saw as being ultimately quite separate, the private sector and the public sector, have been, in the space of AI, conjoined for decades. And that connection is only getting stronger under the current guise of the so-called AI war, which I think is really problematic and is operationalized to try and reduce any type of regulation or restriction that could be put on the tech sector. Because if we don't, then China will, et cetera, et cetera. I mean, it's this absolute bogeyman politics that I think is profoundly worrying. But instead, I think if we look at this analytically, we can see that there's something quite interesting happening with the way in which states and the tech sector are operating right now. And Benjamin Bratton, the theorist, sort of looks at the ways in which planetary-scale computation is, in many ways, taking on the roles of the state. At the same time, states are taking on kind of the roles that you might understand machine logics to do. So we have states taking on the armature of machines, because machines have already taken on the roles and register of states. And that's the parallel that we're starting to see happen.

Taylor Owen: On the governance side of this, I've often been unsatisfied by this push towards AI governance, which is such a hot topic in the tech policy world and the global governance community at the moment. And it's always felt to me like AI wasn't the right unit to be governing. It wasn't the thing that could be governed, even though it was being used as the conceptual framing for a set of policies people were trying to figure out. But that has led to either no progress on a policy agenda or very easy avoidance of meaningful policy from the AI companies, because it really wasn't a tangible thing. So, I guess, what do you think about this broader AI governance framing that is being widely debated around the world? But more importantly, is that right, that AI, as this intangible technical thing, isn't the right governance frame? Do you think it should actually be these extractive inputs that you talk about, or these components of the broader infrastructure? Maybe they're the places we need to govern, and we don't need to think about governing AI as a thing -- we just need to govern all these other components of it?

Kate Crawford: Exactly. And there are a couple of important points to unpack there. First is this idea of: is AI the right unit of analysis for governance, both nationally and internationally? And I would suggest, as you say, it is not. And we need to widen that lens again to look at these deeper questions of extractive infrastructures and the places where these systems are brought to bear on human bodies in institutions; be it criminal justice, or health care, education, or hiring. But at the same time as we're hearing this highfalutin language, particularly around international governance of technology, it's happening against this very dark backdrop of nationalist AI agendas, the usual articulations of xenophobia and mutual suspicion and network hacking. And there's this fantastic book by Tung-Hui Hu, which is called A Prehistory of the Cloud, which really, I think, isolates this shift that has happened, just in the last 10 years really, from this very liberal vision of global digital citizens who were engaging as equals in these abstract spaces of networks. I mean, you'll remember this, Taylor; this is very much Web 2.0 optimism about everyone being able to get on an internet space. But now we've sort of seen a shift towards a much more paranoid vision, of defending a national cloud against a racialized enemy and the spectre of the foreigner threat. So that's the context in which people are now saying, "If we are to create some form of computational governance structure or principles, and laws and regulations, how do we do that at a transnational level?" When clearly that's what we have to do. Regulating at the level of the nation state is also too small a unit, if you will. So I think you're exactly right. To look at the way in which you transcend the bounds of the nation state, to think about the ways in which different sorts of structures of governance around bigger questions could be created -- that's the big challenge. And that's a really important project. And I think, in some ways, we could talk at length about the proposed EU guidelines for AI regulation. Because you can see both-

Taylor Owen: Yeah. Yeah. Focused on risk. But also, industrialization. Right? They're, as you say, very concerned about European industrial capacity to exploit data, to create data, to build AI companies; so it's serving those two purposes there.

Kate Crawford: Right. Spot-on. So in some ways, it's both concern about the "risk", but at the same time wanting to protect the ability to emulate the model, the current model of centralized power in AI development. And I think that that is a major misstep for the EU. I think, trying to replicate those centralized models is the problem.

Taylor Owen: If we come at the governance frame through the lens of resource extraction, and exploitation, labor, and data exploitation, and create governance mechanisms around those, do we still have an AI problem?

Kate Crawford: I mean, it's interesting, because what it does is it de-centres the technology in, I think, a really useful way. Because otherwise, what we have is a situation where we're chasing the tail of industry. For every new development, does the current governance framework extend to it or does it have to be rethought? And this is the kind of chasing game that we've seen with GDPR in Europe over the last few years. It takes so long to get these kinds of regulatory frameworks in place. And when they're in place, they're already behind where the edge of technological development is. So I think we do need to sort of abstract up a layer to start looking at these broader questions around how power is centralized and mobilized by these systems. Who benefits from these systems? And who's harmed? And when we ask those more core questions, I think we get to look at these wider, essentially political economies of infrastructures and data. And I think that means that AI just becomes part of a much bigger set of questions that we have to ask, really around how society is going to be constructed alongside these systems as they start to infiltrate so many of the institutions that we rely on. And I think, without real forms of governance, they do challenge our understandings of democracy. And this is something that you've written about as well, Taylor. And I think you're absolutely spot-on. I mean, this is the core democratic threat until we start to contend with how we would govern these systems more broadly: not just AI, but all of those processes of extraction and exploitation.

Taylor Owen: Yeah. Just to close, you mentioned an amazing Ursula Franklin quote: "The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power." Which feels like it's speaking to that exact final point you make there, that this is about bigger things.

Kate Crawford: Yeah. Yeah. It is. It is about these broader questions around the practice of justice and the enforcement of limits to power. I mean, you'd be hard-pressed, I think, to see where those limits currently extend for the tech sector right now, it's so remarkably unregulated and it faces so few limits in terms of its day-to-day practices. And that foundational problem is the one that I think we have to face. To answer it requires connecting these questions of power and justice, from epistemology to labor rights, from resource extraction to data protections, from racial inequality to climate change. And it actually means that we're having to ask these bigger political questions around what collective political life is going to look like. And where do we say, "That's enough"? Where do we say, "That is where the limit must be set"? What are those politics of refusal and what do they look like? Because without it, as you say, we have no limits and that fundamentally erodes any idea of a functioning democracy.

Taylor Owen: That was my conversation with Kate Crawford.

Big Tech is presented by the Centre for International Governance Innovation and produced by Antica Productions.

Please consider subscribing on Apple Podcasts, Spotify, or wherever you get your podcasts. We release new episodes on Thursdays, every other week.

For media inquiries, usage rights or other questions please contact CIGI.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.