To Protect Our Social Selves, We Need Data Trustees

Because of near-constant data-sharing and collection, our ability to retain some sense of authorship over the way we project ourselves is at stake.

July 22, 2020

It is nearly impossible to go about one’s life without leaking data on a daily basis, even for those remaining mostly offline. Over time, that data paints a detailed picture of our social selves — who we are and what motivates us. Those with access to such data can dissect our lives to an unprecedented degree, and in many cases, this makes us — the data-sharers — vulnerable. Because of near-constant data-sharing and collection, our ability to retain some sense of authorship over the way we project ourselves is at stake.

To understand this kind of vulnerability (and how it differs from our primary, physical vulnerability), one may draw a parallel with the upheaval experienced by those diagnosed with a grave illness or those facing unemployment. Just as we do not want to be defined by our illness or unemployment, we may not want to be defined by our machine-readable past. While a lawyer or doctor can provide legal advice about a wrongful dismissal or help us manage an illness, the manner in which they do so — their professional stance — may either alleviate or aggravate the vulnerability that underlies our need for professional help. This power imbalance is, in part, why we consider doctors and lawyers to hold a different kind of responsibility than, say, a scuba diving instructor holds (even though physical vulnerability is at least as great 20 metres underwater).

Although society is waking up to the insidious ways in which systematic data collection triggers new forms of social vulnerability, we have mostly responded with top-down regulation. Instruments such as the European General Data Protection Regulation are crucial and have introduced a battery of rights and responsibilities, but they cannot on their own adequately address the vulnerabilities at stake. To fulfill this role, a new profession — that of the data trustee — is needed. Acting as an intermediary between data subjects and data controllers, these data trustees would be bound by a fiduciary obligation of undivided loyalty to the data subjects, whose data rights they would exercise.

Designed to allow groups of people to pool together the rights they have over their data, data trusts are but one type of bottom-up empowerment structure. Along with data cooperatives (based on contracts), data commons and data collaboratives, data trusts are not only designed to address the power imbalances between data subjects and data controllers but are also key to developing sorely needed data-sharing infrastructure. Among the many systemic frailties exposed by the coronavirus disease 2019 (COVID-19) pandemic is the clear lack of infrastructure that could have allowed us to overcome the ugly dilemma standing in the way of data sharing for the public good. Had bottom-up data trusts already seen the light of day by the time COVID-19 struck (pilots are still at the development stage; Datatrusts.uk will be posting updates), we would not be in the position of having to choose between the risk of further entrenching surveillance, on the one hand, and depriving ourselves of much-needed data-dependent tools, on the other.

Why would data trusts have made the difference? Because they have, at their heart, the trustee’s fiduciary responsibilities. These responsibilities require the data trustee to represent the interests of the trust’s beneficiaries with undivided loyalty, and they act as a strong safeguard that sets data trusts apart from other types of data-sharing frameworks. By pooling data rights within a trust, individuals can wield the collective power of data to exert influence over how it is used. This leverage can be used to negotiate better terms and conditions with service providers and also, most importantly in the present context, to monitor data-sharing agreements (and concomitant safeguards). By building an ecosystem of data trusts — each with different approaches to data use, and different governance models — both individuals and groups could select a trust that best reflects their aspirations and attitudes to risk. Instead of relying on a one-size-fits-all regulatory approach to setting the boundaries of data use, each trust would define its own set of priorities, taking into account the aspirations (and vulnerabilities) of its members.

Given the significance of the responsibilities that data trustees assume on behalf of the trust’s members, data trustees will need professional training and skills to ensure the decisions they make are sound. They will also need bottom-up standard-setting institutions (such as professional societies) and oversight mechanisms, just as lawyers and doctors have today. All of this institution building and training of data trustees will not happen overnight. Just as the practice of medicine gradually progressed and became professionalized in response to developments in the medical sciences, so will the emergence of “data trustee” as a twenty-first-century profession evolve in response to the constantly changing digital ecosystem. One may hope for more ups than downs along the way; the COVID-19 crisis has provided ample evidence, if any was needed, that the task of balancing our digital freedoms against the public need for data is no less critical than the work of medical professionals on the front lines of today’s pandemic.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Sylvie Delacroix is a professor in law and ethics at the University of Birmingham.