David Carroll on the Dark Side of Digital Advertising

S1E3 / December 5, 2019


Online platforms like Facebook and Google Ads are positioned as superior tools for micro-targeting advertisements. The promise of greater returns on investment and granular control over who will engage with an ad has attracted advertisers.

In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with David Carroll, an associate professor of media design at Parsons School of Design at The New School. Carroll's quest to understand how platforms were monetizing his online activity was featured in the documentary The Great Hack. His research into how online advertising systems work reveals just how little thought was put into oversight and monitoring of the systems that advertisers built.

Carroll argues that financial incentives created the current data-hungry advertising environment. "The advertising complex was built to sort us into our categories without knowing what our categories are. That was the foundation, and then from there the recommendation engines were built to surface vast databases into a user interface, and those were biased towards engagement, to move the chart up, and then the user interface was built to reward people right on the surface for engagement, again, to move the chart up. It was never about health or intelligence or goodness, it was just about greed and metrics." For Carroll, digital advertising platforms are reminiscent of 1830s snake oil salesmen who used penny press newspapers to spread misleading information. Eventually, people were protected from such ads when the US Congress created the Food and Drug Administration (FDA). Now, governments must step in to build the same kind of oversight bodies to regulate digital advertising.

Transcript

This transcript was completed with the aid of computer voice recognition software. If you notice an error in this transcript, please let us know by contacting us here.

 

David Carroll: The company never built an auditing system and so it goes back to the problem of when an advertiser wants to know, "Where did my ads run?" They never built a system to give you that answer. The automation was built in such a one-sided way, it didn't provide for any effective auditability, just from the beginning, and the fact that they could not build that retroactively is an indictment of how badly designed the system was from the start and how unaccountable it was engineered to be.

[MUSIC]

David Skok: Welcome to the third episode of the Big Tech Podcast. I'm David Skok.

Taylor Owen: And I'm Taylor Owen.

David Skok: We've made it to episode three, Taylor.

Taylor Owen: We have. I can't believe it. So if you're joining us for the first time, I encourage you to also, after listening to this episode, go and listen to the first two.

David Skok: In the first episode we spoke to Rana Foroohar about her book, Don't Be Evil, and how big tech companies had unimaginable impacts on our society over the last decade.

Taylor Owen: Then in the second episode, we chatted with Kate Klonick about her work on Facebook's oversight board or what they call the supreme court, so please go check them out and subscribe to the podcast if you haven't done so already.

David Skok: Now on to today's episode. As you heard at the very start, we are speaking with professor David Carroll about data privacy and how platform companies have been caught using our data.

Taylor Owen: Yeah, so David's a pretty interesting character in the whole Cambridge Analytica story. He's an academic who was studying the space from a marketing perspective and ended up becoming a legal activist trying to chase how Cambridge Analytica used his data, and in so doing, ended up a central character in a Netflix documentary.

David Skok: Yeah, he featured as kind of a key character in The Great Hack on Netflix. David Carroll, coming up next on Big Tech.

[MUSIC]

David Skok: Professor David Carroll from the Parsons School of Design joins us from New York. Welcome to the show, David.

David Carroll: Hello.

David Skok: So, the popular narrative is that this was a scandal where Facebook allowed numerous organizations access to the data of around 87 million people, but it's actually a much larger problem than that, and involves the entire information ecosystem. You've said that it goes beyond just information and right to the heart of how business is conducted in the global economy, and where corporate responsibility lies.

Taylor Owen: Could you start by describing to our listeners what is the core structural problem that you see with online advertising?

David Carroll: So, for some reason, we've decided that the ability to target ads efficiently, to save money on advertising, is the most important thing in the world. It sounds like an exaggeration, but it doesn't seem to be: the business imperative at the core of this was the perception among businesses that they waste money on advertising, and any mechanism that makes them feel like they're wasting less money on advertising is justifiable, even if it means a lot of downstream effects. So I think we're at a stage where we're realizing there are a lot of externalities to that mentality and it's probably not worth it, and that they're probably not saving as much money and reducing as much waste as they think they are.

Taylor Owen: There's an ecosystem of data collection that is a part of this, and there is the product that's being sold on top of that ecosystem, right? So the ability to micro-target or to influence behavior. How is that capacity being pitched to anyone who might want to influence the behavior of anybody else, whether it's a marketer or a political campaign or what have you?

David Carroll: Certainly the model is pitched to commercial and political clients almost equivalently: the same tools and techniques used to sell skin creams and ski vacations are used to select and mobilize, or even demobilize, voters, and all the tools are used in very similar ways. We haven't established any firewall between the advertising industrial complex and the electioneering complex, and arguably even the military industrial complex, and because we haven't established any reasonable boundaries between these worlds, they overlap in really troubling ways. The holy grail of advertising is to predict our behavior with a level of precision so that our behavior can be anticipated and influenced to win the bets on our predicted behavior. So, predicting that you will go to a particular coffee chain next week and order a particular coffee beverage and use a particular incentive program to pay for it, and then what happens when we get nudged to win that bet on our future behavior? The notion that our activity is being traded as a derivative really starts to threaten our sense of autonomy when we don't even know that these bets are being made against us.

David Skok: When did it change, David? When did things shift? Because I remember the early 2000s even, it felt like social media was this fun thing to be on and there was a time there where I remember Facebook was facing a lot of heat because they weren't generating enough revenue and people didn't know how they were going to start generating revenue. Was it mobile? When was the pivot to so aggressively and exponentially improve their targeting?

David Carroll: Well, it started even ... this has been a process that's been going on for 20 years. The original promise of digital advertising on the internet was to eliminate the waste, that the internet allowed precision and attribution of whether or not ads worked, and it's been a gradual process of building up an extremely complicated ad tech industry of hundreds of companies that most consumers have never heard of, which have been collecting and buying and selling our personal data for the auctions that occur at high speed. Every time you visit a page, the ads that display are the results of these high-speed auctions. So that process built up and it largely ate itself alive and triggered people installing ad blockers. In 2014 I was trying to warn the industry that people were installing ad blockers not just because they found ads annoying (and ads were made annoying because the data they produced rewarded more annoying behavior), but because people also had some privacy anxiety bubbling below the surface. The industry insisted that nobody cares about their privacy anymore, so they can keep on doing what they're doing, and I said, "No, this is really going to blow up in your faces someday."
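
To make the auction mechanics Carroll describes concrete, here is a minimal sketch of a simplified second-price auction, one common format for real-time bidding. The bidder names, prices and structure are hypothetical and greatly simplified; real exchanges involve many more parties, signals and constraints.

```python
# Minimal sketch of the high-speed auction described above (hypothetical data).
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str   # e.g., a demand-side platform bidding on behalf of an advertiser
    cpm: float    # price offered per thousand impressions

def run_auction(bids: list[Bid]) -> tuple[Bid, float]:
    """Second-price auction: the highest bidder wins and pays the runner-up's price."""
    ranked = sorted(bids, key=lambda b: b.cpm, reverse=True)
    winner = ranked[0]
    clearing_price = ranked[1].cpm if len(ranked) > 1 else winner.cpm
    return winner, clearing_price

# A page view triggers a bid request carrying data about the user and the page;
# each bidder prices the impression using whatever profile data it holds on that user.
winner, price = run_auction([Bid("dsp_a", 4.10), Bid("dsp_b", 3.75), Bid("dsp_c", 2.20)])
print(f"{winner.bidder} wins the impression at ${price:.2f} CPM")
```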

Taylor Owen: Yeah, I wonder, in that arc, how much focus and blame, depending on how you spin it, falls on Sheryl Sandberg. I mean, she developed the targeting model and the financial model for Google and then did it for Facebook as well. It feels like if you were looking for a through line through the narrative of that model that Dave was mentioning, that's probably it, right?

David Carroll: Indeed, the fact that she created the hyper-targeting business model for Google and then did it for Facebook means she single-handedly influenced this business in such a particular way. Her justification is that she has enabled small businesses to afford attention that was previously unaffordable, in a way that would not be possible otherwise. In many ways, she can be credited for that, but the externalities of it are the aspects that she struggles to have good responses to.

David Skok: I'd put the blame on a lot of other places beyond just Sheryl Sandberg. I can recall working in a very large news organization amid what I called the video industrial complex. After programmatic advertising and all of these algorithms had dropped CPMs so low, it left publishers scrambling to find replacements. All of a sudden it was, "Oh, hey, there's video with a $30 CPM," and you had publishers, you had agencies and advertisers, and you had the platform companies all kind of complicit in this false notion that video was going to create the next wave of revenue generation, and if any one of us stopped the musical chairs and said otherwise, we were off the boat. It was amazing to watch at that time, with video in particular, how all of these players in this ecosystem were just rowing in one direction.

David Carroll: Yeah, I would agree that it goes back further than Sheryl Sandberg, really to the decision to build an economy around the metric of the impression, a measurement that has no scarcity attached to it; it doesn't measure a finite resource, even though human attention is a finite resource. And so it created a race to the bottom for all sides of the industry and all these conflicts of interest, which is very reminiscent of the financial crash of 10 years ago now, the mortgage crisis: the notion that everything can be re-bundled and repackaged in very complicated instruments that nobody understands. A similar moment happened before and around the election, when big brands asked, "Where are my ads running?" after reporters published stories that ads for Mercedes were appearing in YouTube videos from violent extremists. The ad tech companies didn't have a good answer to where their ads were running, because nobody bothered to figure that out, similar to the credit crisis, when the investors asked the banks, "So where are these mortgages?" and the banks really couldn't tell them, because everything had been automated into this super complex arbitrage scheme. The way the impression metric lent itself to arbitrage, a level of arbitrage that you can't even map out, an impenetrable, automated algorithmic structure with no transparency, is what got us into this mess, and it doesn't even measure anything meaningful. So we built this whole thing that's ultimately quite meaningless.

David Skok: How do you now feel about the public's willingness to engage in these difficult and complicated conversations whereas before it probably felt like you were yelling into a tornado?

David Carroll: Yes. For example, when I would say to publishers and ad tech people that people were installing ad blockers because they care about their privacy, they would laugh at me, and in some ways you can't blame them, because they were making their decisions based on what is called by some the privacy paradox, or by others the trade-off fallacy: this idea that people say they care about their privacy but then don't behave in a way that shows it, that we share stuff and we sign up for services. So, I'm feeling very positive and optimistic and reassured now, because the reaction to the film The Great Hack shows that when the problem is made into a narrative that regular people can understand, people really do care about their privacy. They just don't know how to care about their privacy, and they're not given mechanisms to do so. And we are up against false choices to begin with. A very common response to the film is deleting Facebook, but do those people also delete Instagram and WhatsApp, and do they have to make great sacrifices in order to make that choice? Is that choice even doing anything? So I think that the response to the film is for me the most personal validation that it was worth pursuing this and it was worth not being discouraged, and that people would come around as soon as they could understand.

Taylor Owen: I really empathize with this notion of the uphill battle this has been over the last number of years, and just how much the public discourse has changed, in large part because of some of the campaigns and activist work you've been involved with. I wonder if we're getting across the actual nature of the democratic threat here, and I worry about this a bit because of what's just happened here in Canada with the election. As you know, there were a lot of people, including yourself, warning the Canadian government that there was going to be a whole set of threats to the election, and I think we can talk more about the policy side of this, but the government did do some smart things, I think, in response to that concern. I was involved in a very big monitoring effort of the election, where we were trying to capture this digital discourse online and see if it was changing people's behavior. The thing we found was not that there was acute foreign interference or that some of the easy manipulative tools that we know have been used in previous elections were present. I worry it was more just two things. One, there was a degrading of the public discourse, right? Because we were having this conversation about an election in that medium, it was more inflammatory, it was more divided, the electorate was more polarized, right? Those don't have an individual cause; those are kind of a function of the ecosystem, is how I would describe it. The second piece is that there's probably a lot of stuff that was happening that we just can't see, right? It's embedded in the design of these tools and how certain voices get amplified and how certain people could be targeted. I guess, as you see these elections unfold and you see the nature of the democratic threat evolving, how has your thinking on that moved on?

David Carroll: Yeah, I think one of the ways that people have started to see it is that the manipulation is right there on the surface. It's not necessarily this impenetrable, illegible algorithm that we can't even understand; it's right in the user interface. The design of the UI is manipulative and creates perverse incentives and rewards bad behavior and trains us to behave in ways that we wouldn't behave if it wasn't for the interface. So a lot of the blame is in the discipline of UX, the discipline that I most closely associate with, the people who are designing the front end. There were no ethics in this industry. A lot of times it gets blamed on the engineers, but the designers have a lot of responsibility here, and as Roger McNamee says in his book and in the film, it's the same techniques as casinos and slot machines. This was all driven by the data. It was driven by what interface would create the chart that would please the business people, and that is the same mentality that comes from ad tech. It's: what technology can we deploy that makes the chart go up, that makes people spend money? And that's the whole motivation. There's nothing else. It creates these terrible machines that are just trying to make the chart go up at all costs, without any other considerations. So when it comes to elections, we've built this machine that prioritizes engagement. Whatever engagement works, it doesn't matter; make the chart go up. And so we inadvertently tapped into our lizard brains and our inability to think clearly, and we can be manipulated at multiple levels: on the surface, in the content itself, in the algorithms that are selecting it, in the machines that are measuring it and feeding it back to us. It's really multi-layered. And so this is the idea that people are coming to terms with: that the interface itself privileges emotion that is negative and moves conversation towards toxicity, because toxicity is rewarded and the opposite of toxicity is not only not rewarded but is squelched out of the flow of experience. So it's quite tragic.

Taylor Owen: You think that's changing our politics?

David Carroll: It seems to be. It's hard to establish causation; there's mostly just correlation. But certainly the advertising complex was built to sort us into our categories without knowing what our categories are. That was the foundation, and then from there the recommendation engines were built to surface vast databases into a user interface, and those were biased towards engagement, to move the chart up, and then the user interface was built to reward people right on the surface for engagement, again, to move the chart up. It was never about health or intelligence or goodness, it was just about greed and metrics.

Taylor Owen: One of the things you seem to see a lot, particularly in the research community, is some pushback on the questions of either how negative online advertising is, or how effective it is, right? So you see that with responses to the claims that Cambridge Analytica was making about the power of their profiling. I think you saw it with the response to Twitter's ad ban, right? Where a lot of people were saying, "Look, there are actually positive benefits to these micro-targeted ad campaigns," and that they're not as effective as people are making them out to be. How do you respond to those challenges, I guess, that are coming from the research community?

David Carroll: In the big picture, I would hope that these are ... that, for example, Jack Dorsey is really just calling for a moratorium, and this would echo what the UK information commissioner has called for. Elizabeth Denham called for an ethical pause on micro-targeting, and the UK Association of Ad Agencies called for a moratorium on it, so I think when the term "ban" is used, it's sort of too permanent. I think it's more that we need to pause this and figure out if it works, and if it works, how, and whether it needs reforms. Let's be more thoughtful about it. So I think it's more of a pulling back, and you can see the way that Facebook is really trying to litigate the details of what they're going to allow and what they're not going to allow in the public sphere. So, there is a large discomfort with it. There are concerns that I have that I haven't seen the skeptics confront. One is that the definition of a political ad is much larger than what the tradition and the existing research consider: the sort of recognizable political ad where everyone who's watching it knows it's a political ad, and at the end it even says, "I approve this message." We know that bad actors don't even stick to that format and run unattributable, untraceable influence campaigns that don't fit into the framework or definition of a recognizable political ad, but they're doing it with the same data and intent, and so I don't think the research community has fully grappled with how the boundaries of political advertising have exploded well beyond their capacity to measure it. The other thing is that the nature of the super sample has not really been confronted by the research community. I fear that companies, and not just Cambridge Analytica, have assembled a super sample of the entire population thanks to the voter rolls, enriched those, and then created models that allow an entire electorate to be simulated in different computer models with the intent of modeling different outcomes and, more particularly, finding outliers in populations that they can then target by name, test messages on, test their receptivity to messages, and hone in on and refine. These people have no idea that they have been singled out, that they are undergoing an experiment, and that their behavior is in a feedback loop that is totally automated and being monitored and moderated in particular ways. There's just no awareness of this, and when elections are won on the narrowest of margins, we're seeing the popular vote skew from the electoral or the polled vote. We see the polls don't line up with the outcomes in different ways. We're seeing the popular vote skewing farther and farther away, and the existing assumptions aren't working anymore, and so these are some of the aspects of the outcome that concern me, because they suggest the political science community hasn't figured out what some operators might be doing. So that's a lot of the concern about the super sample.

David Skok: I suspect Taylor will disagree with me on this, but this is kind of where I find the conversation jumps a little bit, to where the culpability to me isn't necessarily on the platforms, but on the democratic institutions and the governance and the infrastructure we have in place. What I mean by that is 'twas ever thus, you know? Whether you had a social media platform or a newspaper or a fireside chat or TV commercial or, going back, a train, people have been doing misinformation messaging and it's mostly the parties and partisans themselves who do that, not necessarily foreign actors, and I wonder if sometimes when we look at this and we assess it through the prism of platforms, are we not simplifying it too much and absolving everybody else of their responsibility?

David Carroll: Yeah, I agree with that. When I talk about this stuff, it can sound like I am attributing a single cause, but I tend to take the position that the answer is all of the above: you can describe a condition that caused the outcomes and you could say, "Yeah, that contributed to it," and all of these things contribute to it, and I don't think you can even tease them out because they're quite interconnected. Like you said, they're part of a long tradition. We can see in the cycles of history how, with the advent of the penny press and yellow journalism, the patent medicine industry was a literal snake oil industry that was funding news, and then once a muckraker figured out that the whole thing was a scam, it caused the creation of the Food and Drug Administration. There's this relationship between news production, propaganda, fraudsters, scammers, the need to regulate, the role of journalists to poke at what's paying their own salaries. This is a repeating cycle, and it feeds into the political framework and the advantages of weaponizing information. So yeah, I think these are all concerning, but for me, the original concern that motivated me was the international nature of Cambridge Analytica, that the political technology industry had become internationalized since the Obama campaign, and that, to me, seemed to cross a line that was no longer acceptable, because of just the basic principle that you would want elections to be domestic affairs exclusively. That fear was really borne out and ultimately showed that the conduct was unlawful, at least according to UK law. And so the way that data is an atmosphere that doesn't respect boundaries, and is now interfering in the democratic process, is an escalation of the problems that had already existed and is now fundamentally affecting national sovereignty and personal sovereignty in a way that I think is new and alarming, and we have to figure out how to discourage it.

David Skok: You mentioned the anecdote of the FDA and the penny press and all of that, and I wonder now, when we talk about data breaches and the next phase of this, can you tell us about some of the more interesting bills being put forward in the United States to tackle some of the problems that we've outlined?

David Carroll: One bill that's interesting, though I don't know if it'll ever see the light of day (there's something to be said for the fact that it's even been put forth), is from Senator Dianne Feinstein of California, called the Voter Privacy Act. It seems to specifically and narrowly address the problem that Cambridge Analytica itself posed, and gives voters rights to their voter profiles in very specific ways that we don't have right now. It's important because it directly looks at the political technology industry as something separate from the regular advertising industry, even though there's massive overlap. One of the innovations of the bill is that it requires proactive disclosure. That is, candidates and super PACs would have to proactively tell voters that there's a file on them that exists, and "click here to validate your identity to see it." This goes farther than the GDPR. This goes farther than Europe, and seeing the moments when the US decides to innovate and push the boundaries is very exciting to me, because in many ways the GDPR is itself influenced by some aspects of US privacy laws and so on. So there's an international influence of the legal constructs, which is important. We get a bank statement every month, so why is it that unreasonable to think that we should get a data statement, whether it's monthly, quarterly, annually? That's to be hashed out, but there's this idea that it's our data, and it's not just the data that we supply but the data that's inferred about us, and the European model considers inferred data personal data when it's attached to an identity. So, all of these mechanisms are really interesting. So that's an interesting bill. Then, of course, there's Senator Elizabeth Warren's Corporate Executive Accountability Act, which would make executives personally liable for these kinds of violations and crimes. It's fairly interesting to me because you can have the best privacy laws, but if all they do is result in a fine, they really don't deter bad actors adequately, and in the case of the-

Taylor Owen: Well, Facebook's stock went up after the $5 billion fine, right?

David Carroll: Exactly. Fines don't seem to be an adequate deterrent on the top end, where you have huge companies, and on the bottom end, I discovered in my quest for my data that bankruptcy and insolvency law is the problem, that small and medium companies can just go out of business when they get caught and are shielded from liability. And indeed, Cambridge Analytica LLC has been abandoned in bankruptcy court in New York, and the FTC can't even really pursue them. Of course, they basically got away with it under insolvency law in the UK. So we hit the limits with small companies on one end and big companies on the other, and so the way that Senator Warren's plan really puts the onus on executives to run accountable companies, otherwise they go to jail ... You know, Equifax, incredible breach ... It's so frustrating that the penalties are just laughable. It's a joke.

Taylor Owen: We put executive liability on the financial sector, right?

David Carroll: Many people feel that because no bankers went to jail after the mortgage crisis, nothing really changed. The US is very reluctant to prosecute white collar crime and very reluctant to put executives behind bars because we have deference to business, but data crimes are a thing now and we have to realize that.

Taylor Owen: I want to talk about the international piece of this in a minute, but focusing on the US, I mean, these federal bills, as you say, seem fairly unlikely and would require a pretty big political change in the next few years, but it seems like the states are running with some of this stuff right now. So, can California change the American or even international behavior of some of these companies?

David Carroll: There's certainly precedent for the California effect, where a state law can affect a whole economy far outside its boundaries. It's arguable that the entire Western Hemisphere's automotive industry is regulated by a California state law that originated from dealing with the smog problem in Los Angeles; the auto emissions standards created a condition where it became economically infeasible to build a car that could not be sold or driven in the state of California. So, the California Consumer Privacy Act could continue in that tradition. It makes no sense to build two versions of a service, one for Californians and one for everyone else. The GDPR has had effects like this already, in the sense that the big platforms do have features to download your data. I dispute whether you're getting all your data, but that's to be litigated. So, the extraterritoriality of data protection is in effect, and even the language is changing. We like to use the word "privacy" on this side of the world, whereas on the other side of the world, meaning Europe, they use the term "data protection" instead, and to hear an FTC commissioner use the phrase "data protection" is a really positive sign that we're moving away from a term that I would argue is completely meaningless; the industry has really capitalized on the meaninglessness of the word "privacy".

Taylor Owen: The national regulatory approach, or even the state-level one ... I guess maybe California is a small exception just because of where some of these companies are located ... is bumping into the limits of jurisdiction and the scale of these markets. In Canada we had this ad transparency law come into effect six months before the election. There was a degree of compliance from Facebook and Twitter, but Google simply said, "We're leaving the market. It's not worth our investment to build the system to create an ad archive for the Canadian market alone," which seems to be a clear suggestion that we need international cooperation on this. You need markets that are big enough to force change. Do you see that happening, and do we almost need new institutions for that? It doesn't feel like there's an appropriate international coordinating body for this kind of stuff right now.

David Carroll: Yeah. The situation where Google pulled out was really interesting to me, because it illustrated the market effects but also illustrated that the company never built an auditing system, and so it goes back to the problem of when an advertiser wants to know, "Where did my ads run?" They never built a system to give you that answer. The automation was built in such a one-sided way, it didn't provide for any effective auditability, just from the beginning, from the get-go, and the fact that they could not build that retroactively is an indictment of how badly designed the system was from the start and how unaccountable it was engineered to be. But my solicitor, Ravi Naik, described to me that it's a similar problem to piracy: from a legal standpoint, the world had to figure out how to deal with pirates, and it took international cooperation to achieve that. There are data pirates out there, and the more that we succeed at regulating, the more they will be pushed deeper and deeper into the shadows and become even more difficult to detect and enforce against. So, indeed, international cooperation is the ultimate necessity, and that is difficult, especially as countries form a kind of splinternet where ... Obviously China has its own internet and tech industry, Russia is literally disconnecting from the internet, you could say, and there are data localization rules and so on ... The same issues are sort of a pro and a con for keeping the internet a global network. I think a higher global standard will help everyone. It's been quite unfortunate to see how the publishers and ad tech companies have not respected the principle of the GDPR in implementing the cookie notices. I guess it was overoptimistic to expect they would, but the CCPA in California is going to push in that direction anyway. But then, will it only work when you're in California? And so the stubbornness to keep the status quo in place is stickier than I thought it was going to be.
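
To make the missing auditability concrete, here is a minimal sketch of the kind of record an ad platform could keep so that "Where did my ads run?" has an answer. The campaign names, URLs, fields and prices are hypothetical, not any platform's actual schema.

```python
# A hypothetical impression audit trail: one record per ad served, queryable later
# by the advertiser. All identifiers and values are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ImpressionRecord:
    campaign_id: str
    placement_url: str   # the page or video the ad actually appeared on
    timestamp: str
    price_paid: float

class AuditLog:
    def __init__(self) -> None:
        self.records: list[ImpressionRecord] = []

    def log(self, record: ImpressionRecord) -> None:
        self.records.append(record)

    def placements_for(self, campaign_id: str) -> dict[str, int]:
        """Answer 'where did my ads run?' as impression counts per placement."""
        counts: dict[str, int] = defaultdict(int)
        for r in self.records:
            if r.campaign_id == campaign_id:
                counts[r.placement_url] += 1
        return dict(counts)

log = AuditLog()
log.log(ImpressionRecord("brand_q4_video", "youtube.com/watch?v=example1", "2019-11-01T12:00:00Z", 0.0042))
log.log(ImpressionRecord("brand_q4_video", "news-site.example/article", "2019-11-01T12:00:05Z", 0.0031))
print(log.placements_for("brand_q4_video"))
```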

Taylor Owen: The flip side of that disconnect between the capabilities being enabled in this system and the strength, or potential strength, of some government interventions, whether in Europe or in California or Canada, is that in a lot of countries, particularly emerging markets and other regimes, it may very well be the case that the speech policies of the platforms, for example, are better than what's there right now, right? And at the very least, we're opening up a system that's a bit of a wild west for some of the behavior you're talking about in these countries, right? How do you see that side of this debate playing out in emerging markets? In illiberal regimes, we've seen lots of stories about African countries where Cambridge Analytica-like behavior is kind of a free-for-all. What do we do in those situations?

David Carroll: The Cambridge Analytica whistleblower memoirs by Christopher Wylie and Brittany Kaiser talk in detail, in ways that we haven't learned about before, in particular about the work in Africa, or just the general mentality of looking for places with weak enforcement and no laws and the ability to infiltrate a country through its elections in order to exert a kind of colonialism, and Wylie explicitly talks about this as a new form of colonialism. Literally British aristocrats thinking they have a right to meddle in the affairs of other countries, and they're just using tools and tech and data and corruption to achieve it. So, this is a huge problem, and it's important that countries like the US and Canada build upon what the EU has achieved to set a better example, to help the global South protect itself, and to understand that this serves as a deterrent. One of the things that I felt really positive about in the response to the film The Great Hack is that the response from people in South America and Africa was visceral and powerful. There was an awareness of these things in those countries, and the film did help to raise it, and some of that is the international reach of Netflix itself, which put the movie into media ecosystems all around the world in a way that few other platforms, if any, can do. So, I think that's a sign of positive awareness, at least. What we do with that is a different story, but it's similar to the problem of offshore companies and offshore tax shelters: with the atmospheric quality of data, these are all similar problems of international flows, and the worst expression of it is a kind of data piracy.

David Skok: That same international flow, ironically, has actually helped you and this movement; you've become a real hero to a lot of people for trying to gain access to your own data. Just curious, where will you take your campaign next if you do get ahold of your data? What happens if you don't get it? Where are we now?

David Carroll: Well, at the end of the film, I do say that ... I sound quite pessimistic about getting my data back, but I'm feeling much more optimistic now. That was filmed a while ago, and since then, in a UK Parliament committee, the chair, Damian Collins, asked the deputy information commissioner what the status was, and they described that they're making progress in their forensic investigation. So I'm looking forward to the final report from the ICO, which will provide at least a narrative of how Americans' data was collected and blended through different sources and algorithms, and will finally give us a forensic picture of what happened, from a truly neutral arbiter. The information commissioner is quasi-independent. Of course, the information commissioner herself is a Canadian working for the UK, and so you have a voice that has looked at forensic proof and has no skin in the game, so it's going to be really hard to dispute whatever they find. Something that is so politicized will have a neutrality to it that will be really important to advance the conversation and to have a second round of engagement between skeptics and people who are worried about this, and we can have a more informed debate about whether we should be worried about this, or whether maybe it was overblown, because now we really see what was on the servers, or whatever it turns out to be. Then, in terms of me being able to actually see my real data set, it will be very interesting to see what's there, because I was not one of the 87 million people that Facebook notified that their data had been harvested through the API and the personality quiz, but that didn't have any effect on my standing because I had to file anyway, and it will be interesting to see if I have a psychographic model applied to me, which would make sense given the company's scheme, which was to collect 87 million profiles, match 30 million of those to voter records, and then apply the statistical model built from those 30 million to all 200-million-plus registered voters. So the idea is everyone had a psychographic score, whether they were on Facebook or not. In particular, I had my privacy settings in Facebook set to prevent my data from leaking, because I was one of those rare privacy nerds who dug deep into the settings and turned everything off, and that protected me from this particular event but didn't protect me in the grand scheme of things. So, I'm quite interested to see, as somebody who has been privacy defensive for some time, just how much data is in there anyway, and how potentially futile my efforts to opt out have been, given the picture that they're able to create about me regardless. That's a particular aspect that I'm looking forward to being able to see, and also showing that the fight can be won. It's ridiculous what it takes to win it, and hopefully people can see we need to make it much easier to do what I did, and that can be where I take it from there.
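
As a rough illustration of the scheme Carroll describes, fitting a model on the matched subset and then scoring the entire voter file, here is a hypothetical sketch. The features, labels, model choice and numbers are invented for illustration; this is not Cambridge Analytica's actual pipeline.

```python
# Hypothetical sketch: train on the subset of harvested profiles matched to voter
# records, then score every registered voter, matched or not. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Matched subset: profile-derived features (stand-ins for things like page likes)
# plus an observed label such as a survey-derived trait or persuadability flag.
n_matched = 5_000
X_matched = rng.normal(size=(n_matched, 20))
y_matched = (X_matched[:, 0] + rng.normal(size=n_matched) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_matched, y_matched)

# Full voter file: every voter gets a score from the same model, on or off the platform.
n_voters = 200_000   # stands in for 200-million-plus registered voters
X_voters = rng.normal(size=(n_voters, 20))
scores = model.predict_proba(X_voters)[:, 1]

# The "outliers" Carroll mentions: the most persuadable-looking individuals,
# who could then be singled out by name for targeted messaging and testing.
top_targets = np.argsort(scores)[-100:]
print(f"Highest score: {scores.max():.3f}; {len(top_targets)} voters flagged for targeting")
```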

Taylor Owen: We've talked to a lot of people on this show so far who have gone through a transformation in their roles over the past number of years as this conversation about technology and society more broadly has evolved. You've described a bunch of the roles you've played, right? A transition from academic to activist, to legal activist, to policy entrepreneur in this space too. How do you see that arc, and where do you want it to go? Which of those roles are you most comfortable with, and which do you want to keep doing?

David Carroll: Some of it is just being idiosyncratic myself and managing to create an academic profile that does not fit in a box, and luckily being in an idiosyncratic university that doesn't require its faculty to fit into the normal definitions. So some of it is just being in the right place at the right time and being really privileged and really lucky to have found myself in this place. I don't know if I could have done what I did at a different university. I don't know if I could have done what I did without the academic freedom, the job security, the encouragement to be a public practitioner, and without facing the pressures that most of my colleagues in academia face. So, some of it's just being lucky and recognizing that luck and trying to do the most I can with it, but at the same time, being a shape-shifter is confusing and it's hard to characterize myself. It's hard to know who I am, and so I hope that my identity will maybe stabilize from here on out.

Taylor Owen: Cambridge Analytica probably knows that. Maybe you could get that data.

David Carroll: Yeah, that's one of the things that will be really interesting to know: whether, when I get my profile, it tells me who I really am.

Taylor Owen: There you go.

David Skok: David, I think what you are in a lot of ways is a journalist. You don't have to be a professional journalist working every day to do acts of journalism and your pursuit of your own data is to me a journalistic pursuit in the same way that many other journalists every day try to get to the truth about something. So, as a result of that, I think maybe it helps ... That's how I view you, and we're both grateful that you took the time today. Thanks very much.

David Carroll: It's been great to be on and get into the weeds and talk about this stuff, because I wouldn't be able to get into such detail if this stuff hadn't happened.

Taylor Owen: Thanks so much. Really appreciate it.

David Carroll: Thank you.

David Skok: That was Professor David Carroll from Parsons School of Design in New York.

[MUSIC]

David Skok: Well, I hope you found this as interesting as Taylor and I did.

Taylor Owen: Yeah, and let us know what you thought. Use the hashtag #BigTechPodcast on Twitter to talk about this episode. We'll be there.

David Skok: Thanks for listening. I'm David Skok, editor-in-chief of The Logic.

Taylor Owen: And I'm Taylor Owen, a CIGI senior fellow and a professor at the Max Bell School of Public Policy at McGill. Until the next one.

David Skok: Bye for now.

[MUSIC]

Narrator: The Big Tech Podcast is a partnership between the Centre for International Governance Innovation, CIGI, and The Logic. CIGI is a Canadian non-partisan think tank focused on international governance, economy and law. The Logic is an award-winning digital publication reporting on the innovation economy. Big Tech is produced and edited by Trevor Hunsberger, and Kate Rowswell is our story producer. Visit www.BigTechPodcast.com for more information about the show.

For media inquiries, usage rights or other questions, please contact CIGI.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
