On December 11, 2018, Honduran migrants marched to the US consulate in Tijuana, Mexico, calling for authorities to speed up the asylum application process. (AP Photo/Moises Castillo)

In 2018, nearly 70 million people were forcibly displaced from their homes. To deal with multiple complex migration crises, states are increasingly turning to emerging technologies to “manage migration.” Their use is already widespread: technology is contributing to predictions around population movement in the Mediterranean, decision making in Canada’s immigration system and refugee retinal scanning in Jordan.

These implementations come with the promise of increased fairness and efficiency. However, technology is not neutral. It impacts people differently and exposes existing power relations in society. These considerations are particularly important when thinking about the impact of technology on the often discretionary and opaque policies and decisions that occur at and around borders. The growing use of artificial intelligence (AI), big data and machine learning in migration is also a new way for states to create different hierarchies of rights between citizens and non-citizens, to exercise control over migrant populations, and to renege on their responsibilities to uphold human rights by over-relying on the private sector without appropriate oversight.

Experimenting with new technologies in immigration systems has a serious impact on human lives — can it be done responsibly?

Big Data: Predicting Refugee Movements

With large numbers of people on the move, predicting population movements and creating appropriate and quick humanitarian responses is paramount. A variety of initiatives increasingly rely on the collection of big data, or extremely large data sets that can be analyzed to reveal associations and patterns in human behaviour. For example, the UN Refugee Agency is experimenting with using weather data to predict where people may move, and the International Organization for Migration has developed the Displacement Tracking Matrix to track and monitor populations on the move to better predict the needs of displaced people. Information that may be collected to populate these data sets can include analyses of social media activity, geotagging and mobile phone call records. Big data is also used to try to predict successful outcomes for resettled refugees based on community links in various geographic locations.

Of course, when predicting the movement of diverse populations, context matters and data is not apolitical. In a divisive political landscape, migration data has been misinterpreted and misrepresented for political ends, in order to affect the distribution of aid dollars and resources. Data can also be used to stoke fear and xenophobia, as seen in the characterization of the group of migrants attempting to claim asylum at the United States-Mexico border (sometimes referred to as the migrant caravan). Societal fear is then used as justification for increasingly hardline responses that contravene international law and raise profound concerns around basic civil liberties and human rights.

Decision Making: Emerging Uses of AI in Immigration

States that receive large numbers of immigrants are also experimenting with the use of automated decision making in a variety of applications. A September 2018 report (co-written by this article’s author) titled Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System explored the impacts of automated decision making in Canada from a human rights perspective. The report considers the use of AI in replacing or augmenting administrative immigration decision making and how the process might create a laboratory for high-risk experiments within an already highly discretionary and opaque system. In the United States, these experiments are already in full force. For example, in the wake of the Trump administration’s executive orders cracking down on migration, the Immigration and Customs Enforcement agency used an algorithm at the United States-Mexico border to justify detention of migrants in every single case. Technology can be used to support and justify hardline policies and assist state policies that profoundly infringe on people’s rights. Under-resourced communities — such as non-citizens — often have less access to robust human rights protections and fewer resources with which to defend those rights.

Instances of bias in automated decision making are widely documented. When algorithms rely on biased data — on race or gender or sexual orientation, for example — they produce biased results. These biases could have far-reaching consequences if they are embedded in the emerging technologies being used experimentally in migration. For example, in airports in Hungary, Latvia and Greece, a new pilot project called iBorderCtrl has introduced an AI-powered lie detector at border checkpoints. Under the guise of efficiency, passengers’ faces will be monitored for signs of lying, and if the system becomes more “skeptical” through a series of increasingly complicated questions, the person will be selected for further screening by a human officer. While this use might seem innocuous at first glance, what happens if a refugee claimant starts interacting with these systems? Can an automated decision-making system account for trauma and its effects on memory, or for cultural differences in communication? These experimental uses of AI also raise concerns about privacy and information sharing without people’s consent, as well as about bias in identification through facial recognition, as facial recognition technologies continue to struggle when analyzing the faces of women or people with darker skin.

Biometrics: Changing the Concept of Informed Consent

The use of new technologies, particularly in spaces with clear power differentials, raises issues of informed consent and the ability to opt out. For example, when people in Jordanian refugee camps have their irises scanned in order to receive their weekly food rations in an experimental new program, are they able to say no? Or do they have to live with any discomfort they experience in having their biometric data collected if they want to feed their families that week? Does efficiency trump human dignity? In an IRIN investigation inside the Azraq refugee camp, most refugees interviewed were uncomfortable with such technological experiments but felt that they couldn’t refuse if they wanted to eat. Consent is not truly informed and freely given if it is given under coercion, even if the coercive circumstances masquerade as efficiency and better service delivery. Moreover, individuals who choose not to participate in activities such as the use of digital devices or social media — whether due to privacy concerns or simply as a matter of preference — may also be subject to prejudicial inferences.

Further, it is unclear where all this collected biometric data is going. In the Jordanian retinal scanning pilot project, the UN Refugee Agency expressly reserves the right to collect and share data with third parties, including the private sector, without clear safeguards, raising significant privacy concerns. Yet, privacy is not simply a consumer or property interest: it is a human right, rooted in foundational democratic principles of dignity and autonomy. Various international legal instruments also recognize that privacy rights are contextual and that vulnerable groups, such as migrants, have different expectations of privacy, given the risks to their personal safety if their data is shared with repressive governments. Electronic surveillance and biometric practices disproportionately target — and have disproportionate consequences for — marginalized groups.

Technology is, indeed, a useful lens through which to examine state practices, democracy, notions of power and accountability. But it can also be used to justify the different treatment of citizens and non-citizens. While emerging research is beginning to highlight how new technologies such as biometrics, big data and even private-sector AI lie detectors such as iBorderCtrl are used in the management of migration, cutting-edge research is needed on the disproportionate impact of technological experimentation on migrants and refugees. Making migrants more trackable justifies more technology and data collection under the guise of national security, humanitarianism and development. Yet, technological development does not occur in a vacuum; rather, it replicates existing power hierarchies and differentials. Technology is not inherently democratic, and issues of informed consent and right of refusal are particularly salient in humanitarian and forced migration contexts.

A Path Forward: Legal and Accountability Frameworks

There is tension between the push for technological innovation and the existing state obligations to provide refugee protection, as enshrined under domestic and international law such as the 1951 Refugee Convention, the UN Charter and the Universal Declaration of Human Rights. While broad UN strategies and regional mechanisms are being explored, a sharper focus on accountability and oversight mechanisms is needed. The growing role of the private sector in the governance of AI highlights the movement away from state responsibility.

Private sector businesses do have an independent responsibility to ensure that the technologies they develop do not violate international human rights. Technologists, developers and engineers responsible for building this technology also have special ethical obligations to ensure that their work does not facilitate human rights violations. Unfortunately, government surveillance, policing, immigration enforcement and border security programs can incentivize and reward industry for developing rights-infringing technologies. Emerging technologies raise complex legal and ethical issues for businesses and engineers alike. Going forward, companies engaged in the sale of new technology cannot turn a blind eye to how it will ultimately be used, or to its potential threat to human rights.

Countries like Canada can be leaders in the development of new technologies with human rights-protecting frameworks squarely at the centre of discussion. Technology travels. A country’s decision to implement particular technologies, whether they are developed in the private or the public sector, can set a new standard for other countries to follow. Canada may also be responsible for any rights violations arising out of the export of these technologies to countries with weak human rights records that are more willing to experiment on non-citizens and to infringe the rights of vulnerable groups. Canada has a unique opportunity to develop international standards that regulate the use of these technologies in accordance with domestic and international human rights obligations.

These emerging conversations must also address the affected communities’ lack of involvement in technological development. Lived experiences matter and human migration is complex. Rather than make more apps and technology “for” or “about” refugees and migrants and collect vast amounts of data, why not instead involve migrants in the discussion around when and how emerging technologies should be integrated into refugee camps, border security or refugee hearings, if at all?

With the increasing use of emerging technologies in decision making, refugee camps and border spaces, it is worth asking who benefits from these technologies. While efficiency and technological development are valuable, those responsible for human lives should not pursue innovation at the expense of fairness, accountability and oversight. Fundamental human rights must hold a central place in this discussion.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
Petra Molnar is a lawyer and researcher at the International Human Rights Program, University of Toronto Faculty of Law.