Transcript

This transcript was completed with the aid of computer voice recognition software. If you notice an error in this transcript, please let us know by contacting us here.

 

David Skok: Welcome to season two of Big Tech. I'm David Skok, the Editor in Chief of The Logic, a publication focused on the innovation economy.

Taylor Owen: And I'm Taylor Owen, Senior Fellow at the Centre for International Governance Innovation and a professor of Public Policy at McGill.

David Skok: And this is a podcast that explores a world reliant on tech, those creating it and those trying to govern it.

Taylor Owen: So we started recording interviews for this season of Big Tech before the outbreak of COVID-19 and we'll be sharing all of those with you in the coming weeks. But we can't really talk about tech right now without talking about the coronavirus.

David Skok: Like every other facet of our lives, Taylor, the coronavirus has had wide ranging effects on our technologies and the way we use them. From the shift to telecommuting to the privacy concerns around contact tracing.

Taylor Owen: And like everything, governments are also having to adapt right now as they struggle to fight the disinformation that threatens our democracies and indeed actually our public health. The stakes have risen around this issue. There's a lot to unpack in this season of Big Tech.

David Skok: If you've listened to this podcast before, you know that fake news and disinformation aren't exactly new.

[CLIP]

Jessica Dean: Another round of highly anticipated tech hearings is underway in Washington. The leaders of Facebook and Twitter are testifying about how they're trying to stop the spread of misinformation on their platforms.

SOURCE: CBS Philly YouTube Channel

https://youtu.be/vk6f0WhZeQc

Social Media Execs Testify On Spread Of Misinformation On Their Platforms

September 5, 2018

Taylor Owen: But COVID-19 has brought about a wave of disinformation that is unprecedented, both in its scale.

[CLIP]

Joshua Johnson: I do think that the concern in terms of government messaging has to do with being able to cut through all of the foolishness online.

Chuck Todd: That is a new level that MERS and SARS we are in a ... That, to me is something that's a new level of difficulty.

SOURCE: MSNBC YouTube Channel

https://youtu.be/VIBItcvdNu4

Misinformation, Social Media Threatens Coronavirus Response | MTP Daily

March 2, 2020

Taylor Owen: And its impact.

[CLIP]

Robin Roberts: They claim to have a holy potion that medical professionals are calling poison.

SOURCE: ABC News YouTube Channel

https://youtu.be/akBe9Z9m34M

Church Uses Bleach as ‘Miracle Cure’ | Archbishop Confronted

October 28, 2016

David Skok: All of this has meant a lot of work for Angie Drobnic Holan. Angie is the Editor-in-Chief of PolitiFact, the Pulitzer Prize-winning fact-checking organization.

Taylor Owen: We'll sit down with Angie to talk about where all this fake news is coming from and whether this could be a tipping point for the way social media platforms and governments handle the problem of disinformation.

[MUSIC]

David Skok: Angie Drobnic Holan, welcome to Big Tech.

Angie Drobnic Holan: Thank you for having me.

David Skok: So I imagine you're as busy as you've ever been. How are you holding up?

Angie Drobnic Holan: We're doing all right. We are very, very busy. Our website traffic is off the charts and we're all working from home. But so far so good. Our fact-checking method is pretty well established, so we do have a lot of comfort in how we fact-check and how we go about doing the work.

David Skok: Maybe you just alluded to this with the traffic numbers, but can you give us a temperature check on the state of fact-checking right now? Are you seeing more or less misinformation online than you were before the COVID-19 outbreak?

Angie Drobnic Holan: We are definitely seeing more. It does follow a trend that we see whenever there is a high profile news event, and it can be national or it can be worldwide. We have seen a trend where there's an explosion of misinformation that follows. It's a very clear trend. Big news, followed by misinformation, followed by fact-checking.

Taylor Owen: And I mean in previous moments where we've seen that kind of spike in misinformation, there's been a pretty wide spectrum of what we've seen, right? Everything from state-sponsored disinformation to hoaxes and financial opportunism around it. Are you seeing that same spectrum of harmful information around this?

Angie Drobnic Holan: Well, I think we're seeing the spectrum of misinformation. We're definitely seeing hoaxes that seem to play on people's fears. There's always an element online that's what I would call clickbait. There is an element of people who do seem to be trying to share good information and just have caught the wrong piece of information and they're sharing it. It's not ill intentioned, it's just part of the confusion of dealing with a new illness. And then we also have the political layer on top of that. We've been fact-checking President Donald Trump fairly regularly. He's been doing daily press briefings and there's a lot of false statements in those.

Taylor Owen: It feels like there are some real persistent pieces of fully false information, right, around, like, originating from a bat or from a lab in China. Are there some key things like that that are really circulating widely that you're concerned about?

Angie Drobnic Holan: Well, I think the biggest category we've seen lately is false treatments. So people say that drinking bleach can cure you or drinking warm water. Now drinking warm water won't hurt you, but drinking bleach will, and there's a range of kind of phony cures. We've also seen a lot of conspiracy theories about where the virus came from. And the truth is we don't know where it came from, but scientists can say it does not look manmade. It doesn't look like something that was created and deployed. Now I'm not a scientist, but they say that these viruses have markers and the scientists say this is something naturally occurring. Now how it exactly came in contact with humans, we don't know, but it was not something that was engineered.

Taylor Owen: That idea that we don't know seems to be sort of what makes this moment a bit idiosyncratic, in that there is just a ton we don't know and our facts are evolving so regularly. I think even things like wearing masks, right? Where the scientific community and the medical community seems to be, or was for a time, divided. How do you treat that kind of content that sort of sits in this gray area? It can't really be fact-checked, or the fact-checks change from moment to moment, right?

Angie Drobnic Holan: Right. We are in this moment of what I would call informational flux where there's a lot of stuff we don't know. And I think that's why people have turned to news so often and why TV is seeing higher ratings, we're seeing more hits, because people want to know. And there's something of an information scarcity around this novel coronavirus. For us, we pick claims to fact-check that are often making extreme claims. So we can't say exactly where the virus came from, but we do feel confident that it was not engineered in the lab, because it doesn't have those markers. So that's kind of the area where we stick as fact-checkers. We try to pick claims that we feel there's certainty about. We say what we don't know and sometimes the facts on the ground change and then we come back and we do more reporting. And that's been our philosophy for years now before coronavirus and I think it still works now. Even though it is a new situation, scientists are still studying this. There is a lot we don't know.

David Skok: Angie, what drives people to make and disseminate this misinformation? Is it to make money? Is it to manipulate elections?

Angie Drobnic Holan: I think there are multiple motivations. Certainly, we see one group that is trying to make money, and it's just getting people to click on the phony information. I think we also saw in the last election that some of these are organized misinformation campaigns coming from foreign countries. Russia was the big purveyor in 2016, but in recent times there's been some evidence that China is also involved in this. And then some of them are in the U.S.: domestic political advocates, like fans or detractors of Donald Trump, who are trying to put together some powerful emotional messaging to criticize him or motivate voters. And then finally there's a category that is kind of the most baffling to me, which are people who just seem to get a kick out of fooling other people and they're just kind of trolling. And that's a very frustrating category of people, because they don't really seem to have any other motivation than fooling people. And so I think they're going to be harder to dissuade.

Taylor Owen: In terms of the foreign interference category, I've seen a fair amount of discussion about Russian interference right now, but also that potentially, the Chinese government is using some of the same tactics that the Russian government did in the 2016 election. Are you seeing that as well?

Angie Drobnic Holan: Well, it's hard for us to tell to be honest. Usually we turn to academic researchers after the fact who give us better information than we can get on our own about where the messaging is coming from. Because we're fact-checkers, we're focused on the content. I mean we're interested in who's purveying messages, but a lot of times it's really hard to tell.

Taylor Owen: And you're looking at the output, right? Not necessarily the input and the network patterns.

Angie Drobnic Holan: Exactly. We're looking at the output, whereas some of the academics can really look at internet traffic patterns and really see which campaigns are coordinated and which are not. One of the interesting things though that's happened since 2016, which the academics have told us, and we can see for ourselves too, is that lots of Americans have picked up the tactics of misinformation and that kind of very negative messaging. So Americans are learning too.

Taylor Owen: Yeah, we've run a big monitoring project in Canada and see that here too, right? It's third-party groups, it's individuals, it's political campaigns, all using these same tactics because they work, unfortunately.

Angie Drobnic Holan: Yeah. And for us at PolitiFact, when we debunk things, we have this standard methodology. We're basically tracking information back to see where it came from. If something is misinformation, it usually seems to spring out of nowhere. There's not a lot of history to the piece of information until you find it being circulated on the internet with like a, "Can you believe this?" tag. When things are real, they tend to have some sort of history to them, so that you can find a study in a peer-reviewed journal or you can find an on-the-ground news report from a reputable news organization. And so we really are focused on the idea that we can know what is real or not, and we just need to think about it systematically and go through the channels looking for evidence. And then we can find it and say, that's true. Or we say there's no evidence for this. It's a conspiracy theory. It's false.

Taylor Owen: So in some ways this is an epistemological problem. It's about how we determine and then signal what is true and false in society and how we come to know about the world. But one of the real challenges here, is that very often it's political figures that exist in this grey area. And particularly now, we're seeing disinformation flow from the top. How do you as a journalist deal with that, where it's the political figures themselves that are spreading the myths and disinformation?

Angie Drobnic Holan: Oh, it's incredibly challenging. And I studied literature in the 1980s, so I'm very familiar with these epistemological arguments that say the truth is subjective and relative. And I think that's fine for what I would describe as metaphysical questions, but in the realm of politics, things can be confirmed or debunked. And it's very seldom that I've been fact-checking a political claim and come to the conclusion that, "Oh, it just depends on how you look at it." I mean, things can either be confirmed or debunked. And I've seen that over and over again. So whenever a politician says, "Oh well it's my opinion, I'm entitled to my own opinions, and facts are subjective." No, we push back very, very hard against that. And that's what we're seeing now. I mean, I think with President Trump's press briefings, especially on the coronavirus, he's trying to create his own more positive reality about the state of the pandemic and his own administration's response to it. And I think that dynamic is what is so confounding and strange to deal with as journalists.

Taylor Owen: So I wanted to transition a bit to the platforms' role in this. You guys are part of Facebook's fact-checking network. Can you give us a sense of how that's working now and how you evaluate that system at the moment?

Angie Drobnic Holan: Yeah, it's a very interesting program, because the way it grew was that after the 2016 election, some of us fact-checkers around the world had connected into a network. We call it the International Fact-Checking Network. And it's really just a way for us to share best practices and support each other in the work. But after the 2016 election, we wrote an open letter that we published on the internet to Facebook that basically said, "You have a problem with misinformation on your platform and we think we could help." And then Facebook, frankly to my surprise, took us up on that offer and started what they call a third party fact-checking program.

Taylor Owen: Yeah. And there's been a lot of concerns about, I guess, the scale of it and whether it is matching the scale of the problem, discrete from the efforts of individual organizations. And I saw the other day that Avaaz found that 40% of the content that they had identified that was flagged by the fact-checking network was still up on the site. Do you think the resources that are needed to actually fact-check a platform that has 2 billion pieces of content a day are there? And how do you come at that scale challenge here?

Angie Drobnic Holan: That's really hard to tell, because I mean it's a huge platform. And one thing that I have definitely learned as we've done this work is the scale of the platform: how many users there are, the freedom that users have to post. So I can't really answer that question, and really I think the only people who could answer it would be Facebook. And as I'm sure you know, they're pretty private about their data and how they handle these things. I can say from our point of view, we are able to access and fact-check misinformation at a level we never had before. Because before this program we would tell our readers, "Hey, if you see something on Facebook, email it to us and we'll be happy to fact-check it." Well now we're fact-checking many more posts and putting out those fact-checks. And when we started the program, the fact-checks did okay traffic-wise, I mean we could see that people were clicking on them and they were just okay. But now, for whatever reason, they're some of the most popular fact-checks on our site. So from an impact point of view, from our metrics, from PolitiFact's world, the program seems to be working well. I can't tell from Facebook's point of view how well it's working. But you can see how misinformation is more of a process to be managed. I don't think we'll ever get to a day for any of the platforms when we're like, there are zero posts of misinformation on this platform. Because I think there's some kind of human impulse to create this kind of content, whether it's for monetary gain or for trolling purposes or for a tall tale. So I think if we see improvement here, and I do think, just from the dynamics of the program, there is improvement. That satisfies me as a fact-checker.

David Skok: So a few months ago, the company said it wouldn't take down political ads that contained misinformation. Now they're taking down misleading content fairly aggressively related to COVID-19. What do you think that means for content moderation after this pandemic is over?

Angie Drobnic Holan: I am very intrigued and eager to see how this develops. I have always been in favour of fact-checking political content, whether it's on Facebook or anywhere else. So Facebook's exemptions are something that in principle I disagree with, and we've told them that. And I think what we're seeing with the coronavirus, because it's such a critical public health issue, they have quite a bit of power to rein in misinformation. And I am hoping that they will take lessons from this really extreme case and apply it to more everyday types of misinformation. Part of this is my bias as a fact-checker, but I am all in favour of freedom of speech and the First Amendment, but I think false information is extraordinarily pernicious and it needs to be handled in a relatively aggressive manner. It can't just be left to say, "Oh well we hope people will find the right information eventually." No, that's not a way for a healthy democracy to function, with misinformation swirling all around and people being not sure what's true or not.

Taylor Owen: And part of the challenge of that misinformation swirling all around is how it is incentivized. It seems to me there's not just a point of creation and point of distribution problem around this false information, but there's also a structural problem that sensational information is incentivized and certain types of engagement go viral on these platforms versus others. We talked about the financial motivations behind digital advertising and micro-targeting. And that's not just an issue of fact-checking. That's an issue of the design of the system itself. And so far I think we've been really reluctant to go there, because it gets at the financial models of the companies, the very nature of our information system. And I guess I personally kind of hope that this moment where we're really putting a priority on everybody getting reliable information, might shift that discussion a bit. But I'm curious what you think about that.

Angie Drobnic Holan: I think it definitely could and basically time will tell. I know that there is pressure on the platforms coming from a variety of sources right now. So their users are not happy with the false information. Regulatory authorities are not happy. So then you have this example of the coronavirus and you are seeing tools being used where maybe there would be some reluctance before. So I do hope it would shift. But I do think that we are in early days of internet technology and there's more to be learned. And I do think one day, we're going to look back on these years and be like, "Wow, can you believe it was like that or that we put up with that kind of thing?" And the answer will be like, "Yeah, it just took us a while to figure out how to handle this communications tool in a way that was relatively healthy."

Taylor Owen: God, I hope that's the answer.

Angie Drobnic Holan: I hope that's the answer too. I'm an optimist. Now, you'd say "relatively healthy," but I feel very fortunate, because I've lived in both the digital age and the pre-digital, so I've seen both. I remember when people said, "Oh, it's terrible that newspapers have this monopoly on information and they only show what they want to show and they only serve powerful interests and minority voices are not represented. And if only the people could have a tool to share their point of view, everything would be so much better." Well then we did get the tool, and I do think that you do hear a lot more voices and that's a good thing, but it didn't come without problems. So technologies, they kind of give and they take. And what we want to do, I think from an informational point of view, is find good ways to manage them. It's not a problem that's going to be solved. There's not going to be some golden utopia in the future, but we can make our processes work better or worse depending on our choices.

David Skok: One thing that I've found quite heartening through this terrible crisis that we're going through is the flocking to traditional news outlets that a lot of readers have done. Obviously as a journalist, which I am, that's been an encouraging sign. Now you counter that with, in March, Facebook, YouTube and Twitter sending thousands of their moderators home. And they aren't journalists in the traditional sense, but in many ways they are. And these companies are being forced to rely more heavily on AI to do their content moderation. And that's led to some problems. Content from outlets like The Atlantic is being taken down because the AI isn't good enough to do that work yet. So how do you think the impact of AI content moderation now will affect all of this moving forward? Because we know with a lot of certainty that all of these companies are going to do everything they can to make sure that it gets up to speed and is even more effective now than it was.

Angie Drobnic Holan: I'm kind of down on the AI technology. And you mentioned The Atlantic; PolitiFact was a victim there too. There were two days at the beginning of the crisis where our readers were emailing us, telling us our content was blocked on Facebook. And we got in touch with Facebook and they did straighten it out relatively quickly. But I think it's a perfect example of the perils of AI. When you're dealing with problems of truth and falsity, they're very abstract concepts and they don't lend themselves to zeros and ones. And I know that there's a lot of hopefulness among technologists that automated solutions will help here, but I just have a hard time seeing how they're going to be implemented on a wide scale, because whenever they're tried, we see these faults and failings, and because computers are just not as sophisticated as human beings when it comes to natural language processing and understanding messaging and being able to deal with the same content being expressed in different ways and using different wordings. So I'm not saying I'm against AI, let's try it. I just haven't seen it work very well and I'm a little dubious that we're going to get to some perfect place with AI.

David Skok: All right, so journalists win. I'll take that as a victory.

Angie Drobnic Holan: Yeah. Again, another bias I have, I'm in favour of the journalists.

Taylor Owen: I wonder if, with the public health impacts of misinformation so clear now, governments are going to use that to step more aggressively into this content moderation conversation. The Canadian government, for example, has just said they're considering a far more aggressive fake news law in response to health misinformation. I wonder what you think about that. Is this going to be a new space of government intervention?

Angie Drobnic Holan: I think we're going to see some governments step in. I mean certainly Europe has been quite aggressive in creating new regulations and legal frameworks for the platforms and for misinformation, and I think they're leading on these issues. I think in the United States we have two problems. Number one, we have this tradition of First Amendment freedom, so any sort of curtailment of messaging is problematic on that level. But then we have complete congressional gridlock going on, because there is legislation in the U.S. Congress to crack down on misinformation and false messaging, especially around elections, but it's just not going to go anywhere while our government has the makeup that it does. One thing that's heartening to me, and I'll end on a positive note, is that everyday people are coming to an understanding that they don't have to settle for bad information, that they have sources they can go to like PolitiFact, like other fact-checkers, and that they can put pressure on some of these platforms to improve their practices. And I do think that we are on the other side of things getting worse. I do expect things to get better from here on out, or stay the same, because I think that people have lost patience with so much misinformation and that we are working towards solutions now.

David Skok: I welcome and appreciate and value your optimism. So I'm going to do what I do, which is come in with one last question for you that's maybe not as optimistic. What if we don't?

Angie Drobnic Holan: Yeah. What if we don't?

David Skok: What's at stake if we're unable to curb this flood of disinformation that we're up against?

Angie Drobnic Holan: Oh, nothing good. A return to the dark ages. I don't know. I mean, the metaphor that comes to my mind is when barbarians overran Europe and some of the great manuscripts of civilization were hidden in monasteries until the days of chaos passed, so to speak. But I mean, to me, it very much does feel like that's what journalists and other people who value reason and logic are working for. It is an approach where we deal with the world through intellect and science and rationality. And so if we're going backwards in that respect, I mean I think we could be in for some dark days, but I do think there are still those of us who are working to preserve the old methods of how we approach the world, using that reason, logic, science. And I'm hopeful those will prevail.

David Skok: Angie, thank you very much for joining us today. It's been a fascinating conversation. I really enjoyed it.

Angie Drobnic Holan: Thanks for having me.

Taylor Owen: Big Tech is presented by the Centre for International Governance Innovation and The Logic and produced by ANTICA Productions.

David Skok: Make sure you subscribe to Big Tech on Apple Podcasts, Spotify or wherever you get your podcasts. We release new episodes on Thursdays every other week.
