Residents watch as opposition protesters erect barricades before clashing with police after the election result was announced, in the Mathare slum of Nairobi, Kenya, October 30, 2017. (AP Photo/Ben Curtis)

While Cambridge Analytica might be best known for harvesting millions of Facebook users’ personal data to meddle in both the UK Brexit referendum and the 2016 US presidential election, the consulting firm has also had a hand in influencing elections in the Global South. In Kenya, as Nairobi-based author and activist H. Nanjala Nyabola notes in a recent essay for the Centre for International Governance Innovation, Cambridge Analytica mined voter data to aid President Uhuru Kenyatta’s campaigns in 2013 and 2017, exploiting the country’s deep tribal divisions.

Since Kenya’s 2007 election, Nyabola writes, regulating hate speech and disinformation has become much more challenging. Digital platforms — such as Facebook, Twitter and Google — play a significant role in challenging negative stereotypes about political behaviour in the country and making room for citizen agency. But they also contribute to the spread of radicalized and fake content, and they often operate outside of existing regulation. What should the approach to platform governance look like? And what happens when companies — Cambridge Analytica, for example — engage in what Nyabola calls “digital colonialism”: the harnessing of those platforms to influence politics in countries far from the one in which they’re based?

Following the publication of her essay, we spoke with Nyabola about Kenyan politics in the digital age, why she advocates for a balanced approach to platform regulation and the need for a “more sophisticated conversation” around the impact of technology on society.

Nyabola’s essay is one of several published by CIGI on new models for platform governance — the entire series can be read here.


Your most recent book is Digital Democracy, Analogue Politics: How the Internet Era Is Transforming Kenya. What would you like people to know about how digital technology has affected Kenyan politics?

As I argue in all of the pieces that I’ve written, and in the book, you can’t understand how tech is going to affect a society if you don’t understand the society in question. What I think is interesting about misinformation in the political space in Kenya is how it intersects with pre-existing patterns of misinformation — how the digital amplifies things that have been happening before. In Kenya, we have a long history of rumours and political misinformation, coming from the state especially. I say in my book, and I’ve said many times in public, the number one purveyor of fake political news in Kenya is the government of Kenya. That has a lot to do with state control and state capture of traditional media, and how the period of free press in Kenya was greatly compromised, especially from the pre-digital era into what I call Kenya’s first “digital decade,” from 2007 to 2017.

All of that by way of context to say what we’ve seen in Kenya is an interesting action and reaction. Many of the things that people are afraid of in other parts of the world are coming to play, and that’s where the Cambridge Analytica piece comes in. We’re seeing Western [companies], and also companies from China and other parts of the world, coming and spending significant amounts of money, or being hired by political parties to influence the information that is created and shared in the digital space. That could be social media manipulation, it could be fake news websites, it could be creating fake websites for a specific politician — that was really big in 2017. Some of the companies are local, some of them are international working with local partners.

The fundamental reason why this happens is because there is a lot of money in African politics. There’s a lot of money in Kenyan politics. And not just big money by Kenyan standards, but big money by global standards. What I say in the book is this brings out the parasites — that has to do with the changing nature of global capital. The digital frontier itself is new but the idea of mercantilism — foreign capital going to “explore” the rest of the world on behalf of power — that is not new. That’s why I call it digital colonialism, because it’s replicating the contouring of colonization.

The platforms on which much of this disinformation is being spread are not owned by Kenyans, are not controlled by Kenyans. So what does accountability for political misinformation look like when a British company uses an American platform to influence political discourse in a Kenyan election, [and] that results in deaths and destruction in Kenya? What does accountability look like in that framework? I think this is going to be one of the most urgent questions that we’re going to have to deal with in the coming years. 

What are the challenges that come with political discourse shifting to online spaces, especially in an election context?

[This shift] is fundamentally changing the way in which people process political information. We’re going to have another election in 2022 in Kenya, and it’s going to be the first election with the first generation of “digital natives” — people who are [roughly] between 18 and 25. These are people who don’t remember what Kenya was like before mobile phones were everywhere and the internet was everywhere. These are people who don’t know how to read newspapers. That’s not a disparaging statement — it’s just the way print media has gone in the country; newspapers became incredibly expensive, a luxury item. That is also part of the broader narrative of state capture. At the height of the democratic moment in Kenya, the biggest site for criticism of the state was print media. That is why it sustained so much attack and co-optation from the state.

What does accountability for political misinformation look like when a British company uses an American platform to influence political discourse in a Kenyan election?

So now we have a print media that is reluctant to be the voice of criticism, that has retreated, and we have these kids who don’t know what it is to sit through a 30-minute news broadcast, don’t know what it is to read a newspaper cover to cover, don’t know what it is to be confronted by information complexity. I’m not even just saying that people are getting their news from Twitter, because that’s not the case; it’s coming from blogs, it’s coming from WhatsApp messages that have been forwarded to you. It fundamentally changes the way you consume and understand political complexity.

Sixty percent of Kenyans are below the age of 35. So we are going to be confronted with a new political reality, and a new reality for political information. That’s why, to me, this is going to be an incredibly urgent question: how are people supposed to think about politics when the traditional tools that we use to communicate political information have retreated, and the new tools are increasingly not fit for purpose? 

What would you like to see when it comes to platform governance?

I think the first thing is that we need to keep up the momentum that we’re seeing on platform accountability. A lot of these platforms have gotten away with this “we’re the good guys” sort of “white hat” narrative, and not really been held accountable or taken to task for the negative outcomes. I’m not a Luddite, I’m not anti-tech. It’s just a question of…realizing that the things that people do at the platform level have consequences at the social level, and sitting in the discomfort of that, and forcing these platforms to sit in the discomfort of what disruption means. It’s not just a fancy word that you throw around at conferences; what does it actually mean and how do you deal with that?

So, I want to see a little bit more accountability for platforms at the legislative level. In the United States we’re seeing a lot more of that. I think Singapore also has done a lot on this, for better or worse. More parliaments bringing these tech executives into a room and saying, Well, if you’re going to operate here, it’s not going to be a mercantilist sort of frontier market approach. There are going to be legislative and sociopolitical frameworks in which you have to operate. That’s one thing.


At a social level, I want to see more investment in understanding the societies in which this tech is deployed. To me, this assumption that developing countries are blank slates onto which technological fantasies can be projected is really dangerous. What we’re seeing is that the consequences are usually far graver than they are in countries that have robust legal and political frameworks. We’re talking about Myanmar and genocide, about Ethiopia and this escalation of hate speech leading to what some people are calling…a campaign of ethnic cleansing, influenced greatly by Facebook posts. We’re talking about elections that fall apart and people dying in Kenya.

This fantasy that you can just build platforms in Silicon Valley and spread them around the world without having to engage with the realities of the societies into which you’re projecting them, I think, needs to be challenged at a social level. To me, that really boils down to bringing human beings back into the conversation, through engaging with social sciences and humanities — and people. Tech cannot continue to live in a conceptual bubble away from the reality of human behaviour and human history.

How would you approach the regulation of digital platforms? Are there lessons to be learned from the traditional print media model?

In Kenya, like I said, the internet replicates the contours of what went before it, occupying the space that the traditional media abdicated. [With] print media, there was awkwardness, but then there were the right questions asked, and then the right type of regulations were developed. Some of that was self-regulation and some of it was state-driven regulation, and you have this sort of complex synergy between the two. You have, for example, guidelines that media houses set for themselves, but then you also have legislation. Just about every country that has aspirations of a free press is constantly having this interaction between self-regulation and state-driven regulation.

What I want us to have is this kind of complexity and this conversation when it comes to platform governance. I don’t like the black-and-white nature of the current conversation, which is like, it’s either all or nothing; you’re either all self-regulating or it’s all state-driven regulation. That’s not the reality of a lot of sectors — the truth is you need both. So that’s what I would love to see: for us to sit in the complexity of what regulation could look like, and really start to ask those questions. 

How does social media amplify existing inequalities in society — and how can those responsible for building platforms and apps guard against that?

To me, first you’ve just got to bring people back into the room; you’ve got to bring human nature and human behaviour back into the room. That means having a really good understanding of how people have behaved in the past, and how people are likely to behave in the future — and not how you want people to behave, which is a lot of what this predictive, speculative building is doing.  

You have to build for inclusion, you have to build with perspective, and you have to build with intention. [You need to ensure] that you’re not just building things and hoping for the best, that you’re not building things that don’t reflect the reality of human history and human behaviour, and that you’re not building things that in and of themselves will intensify existing power imbalances.

Tech cannot continue to live in a conceptual bubble away from the reality of human behaviour and human history.

The truth is that every society has power imbalances, and the internet is always going to intensify whatever it finds in those societies, sometimes for the good, sometimes for the bad. The only thing we can really do is have a good understanding of what those imbalances are, so that we can, as far as possible, mitigate for them and not end up with this whole, “Oh crap, the horses are already out of the stable, the genocide’s already happening, and we have absolutely no idea how to fix it.”

I think a lot of the time, when critics or analysts speak, people tend to interpret it as tech pessimism, and I’m very careful to emphasize that it’s really not; it’s just about providing that reality check so that people can have a deeper perspective. Especially when it comes to Africa, policy making and tech, we’ve seen so much unchecked optimism, and tech companies being given an incredible amount of power. The concerns of the citizenry and of ordinary people are kind of a distant fourth or fifth concern. What people might be missing is that people are not rejecting technology; we’re asking for a more sophisticated conversation around the impact of technology on our lives.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
  • Catherine Tsalikis is the senior editor for OpenCanada.org. Previously, she worked as a producer for the CBC’s Fifth Estate and CTV News Channel. She also worked as a politics producer for London’s Sky News, and as an editorial assistant for The World Today magazine, published by Chatham House.