Last month, the Kenyan judiciary delivered a second major blow to the government’s planned rollout of a new biometric-enabled digital identity scheme — a decision that will hopefully shape and inform public debate over digital ID systems as they rapidly proliferate around the world.
The scheme is meant to provide Kenyan citizens and residents with a Huduma Namba (Swahili for “service number”) and a biometric-enabled digital ID known as the Huduma Card, with data processed and stored in a database known as the National Integrated Identity Management System (NIIMS). Much like India’s Aadhaar scheme, NIIMS would act as a national population register. The Huduma Card would effectively consolidate an individual’s national ID (important documentation that Kenyans seek to acquire at the age of 18), passport, driver’s licence, social security card and national insurance card into one credential, with a Huduma Namba becoming necessary to access public services and benefits, enrol in school, vote and, indeed, operate across most domains of life.
Such a vast collection of data implies equally vast civil and human rights considerations, among them significant privacy concerns. A recent breach of a similar centralized ID database in Argentina compromised the personal information and credentials of an entire population.
The problematic rollout began in March 2019, when the Kenyan government commenced the widespread collection of biometrics and other personal information via hardware tablets supplied under a secretive and controversial contract with French biometrics company IDEMIA; the firm, whose worldwide revenues surpassed €2.2 billion in 2020 alone, was recently banned from doing business by the Kenyan Parliament. Tens of thousands of individuals living in Kenya are plagued by a “double registration” problem, leaving them unable to obtain the digital IDs because their biometrics are included in UN Refugee Agency databases. Other logistical and political hurdles included registration documents being available only in English; little to no public consultation on the program; and, reportedly, citizens being threatened to compel their registration.
As activist Nanjala Nyabola observed, “The Huduma Namba was never subjected to adequate public participation: there was no interest in bringing citizens on board to shape and deploy the system.”
The first major legal challenge to the Huduma Namba scheme was brought by the Nubian Rights Forum and the Kenya Human Rights Commission, which challenged its constitutionality, particularly in the absence of a law to govern data collected and processed under it. While the case was pending judgment, the government rushed to enact the Data Protection Act of 2019. In January 2020, the High Court ruled that collection of DNA and GPS data under the scheme was “intrusive and unnecessary” and that implementation of NIIMS could only proceed with an “appropriate and comprehensive regulatory framework” in place to ensure equality, non-discrimination, privacy and data protection rights, requiring meaningful operationalization of the act. That case is pending judgment before the Court of Appeal.
In a new judgment delivered in October, the High Court delivered a second major blow to the scheme’s rollout in holding that the government’s failure to undertake a data protection impact assessment (DPIA) before issuing Huduma Cards and processing data in NIIMS contravened Kenya’s Data Protection Act. While it found a plea to have the rollout stopped to be moot (as 11.2 million Huduma Cards have already been processed and more than US$90 million has already been spent on the scheme), the court did order the government to undertake a DPIA before proceeding.
As Kenya’s High Court is a court of first instance, and the government has already appealed the decision, its value as legal precedent is uncertain. Nevertheless, it sets an important social and political precedent about the role of digital ID systems in furthering discrimination and exclusion, promoting control and surveillance, and threatening core civil and human rights, and is noteworthy for several reasons.
First, the decision clarifies the relationship between data protection and privacy under Kenyan law, whereby data protection rights are secondary rights derived from primary rights enshrined in the Constitution. As the Court explains, Kenya’s Data Protection Act is meant to give effect to article 31 of Kenya’s Constitution, which guarantees the right to privacy. Per the Court, retroactive application of the Data Protection Act is justified because the act is “more of a bulwark against the excesses of the state than a tool imposing new obligations or duties on the state.” When we lose sight of this relationship and the source of data protection law, or ground rights in consumer-based frameworks (as is the case with several American laws), it is easy for administrative or commercial incentives to overtake legal, ethical and other normative concerns.
Second, and relatedly, the case reminds us of the importance of ex ante rules and measures in safeguarding rights. Section 31 of Kenya’s Data Protection Act requires that a DPIA describe the purposes, necessity and proportionality of data processing, assess the risks posed to individual rights and freedoms, and set out measures and safeguards to address those risks. In this way, DPIAs and other impact assessments, such as human rights impact assessments, are, effectively, ways of introducing beneficial friction into activities that wield disproportionate power. As the Open Society Justice Initiative observes, “The ruling highlights an important matter regarding protective mechanisms for digital ID systems more generally, namely that legal and regulatory frameworks should be in place before these wide-ranging systems are introduced.”
Third, digital ID systems are inherently high-risk due to the nature of data they implicate and the potential consequences for people. As such, the High Court’s decision is also an important reminder that our assessment of how any given digital ID scheme impacts civil and human rights cannot begin with the technical contours of a system itself. Rather, it must begin with questions of process, including the social and political context into which it will be introduced; the way that it is designed, developed and deployed; and who is consulted along the way (and how). As Nyabola writes, “The Huduma Namba is a reminder that it is impossible to understand what effects a digital technology will have on a society without understanding the society in the first place.” Meaningful consultation with and participation in decision making on the part of those most impacted by a system, as well as mechanisms for democratic oversight, are essential.
Fourth, as legal scholar Laura Bingham has commented, the judgment is a watershed moment in respect of “fairness in relations between individual rights and state power.” While the Kenyan government argued that the Data Protection Act could not apply because it only became law after the system’s rollout began, the Court deemed it retroactively applicable on fairness grounds. As Justice Jairus Ngaah said in the October judgment, “Since the state chose to put the cart before the horse…it has to live with the reality that there now exists legislation against which its actions must be weighed irrespective of when they were taken”; further, “I would stand with the individual or the citizen against the might of the state and hold that fairness [requires retrospective application].”
This characterization of the judiciary as separate from the state and aligned with the people is a reminder that in assessing a digital ID scheme, we must account for the power dynamics between people and the state, including, for example, whether there is a strong or independent judiciary in place and what commercial incentives may be in play.
The indirect approximations of our behaviours, preferences and actions achieved through practices such as online behavioural advertising and surveillance capitalism have been a focal point for privacy and digital rights advocates over the last two decades. Digital ID schemes, in contrast, can directly track and surveil our lives.
As such, it is critical for digital rights defenders to reflect on the experiences and jurisprudence emerging in Kenya, India, Nigeria, Jamaica, Uganda, Pakistan and other countries — places too often overlooked by those nations that typically dominate the global discourse around technology law and governance.