What You Need to Know about the Grand Committee on Big Data, Privacy and Democracy

May 29, 2019
Seats for Facebook's Mark Zuckerberg and Sheryl Sandberg remain empty at the Grand Committee on Big Data, Privacy and Democracy hearing in Ottawa. (CIGI Photo)

Canada is at the front line in the battle for the future of democracy, as governments struggle to comprehend and respond to the challenges posed by the surveillance capitalism practised by global technology companies.

That was the testimony of Shoshana Zuboff, author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, speaking in Ottawa on Monday before the International Grand Committee on Big Data, Privacy and Democracy. While Zuboff was largely referring to the controversial Sidewalk Labs project and the development of Toronto’s Quayside district, her words were also a nod to Canada’s role in helping bring together this committee of politicians from around the world, which is meeting this week, from Monday to Wednesday.

Individually, the governments represented in Ottawa this week have little chance of addressing the power that global technology companies possess. But together, they can build momentum and strengthen their capacity to respond.

The grand committee, which held its inaugural meeting in the United Kingdom last year, is made up of politicians from Canada, the United Kingdom, Argentina, Belgium, Brazil, France, Ireland, Latvia and Singapore. This year, the committee members are joined by representatives from Estonia, Ecuador, Mexico, Morocco, and Trinidad and Tobago.

Representatives of Facebook, Twitter, Google, Amazon, Apple, Microsoft and the Mozilla Foundation are all scheduled to appear before the committee. Top Facebook executives Mark Zuckerberg and Sheryl Sandberg have even been subpoenaed, although they have ignored the summons to testify. Kevin Chan, head of public policy for Facebook Canada, will answer questions instead.

The grand committee has already agreed to five principles, set out in a declaration signed last November:

  • The internet is global, and law relating to it must derive from globally agreed principles.
  • The deliberate spreading of disinformation and division is a credible threat to the continuation and growth of democracy and a civilizing global dialogue.
  • Global technology firms must recognize their great power and demonstrate their readiness to accept their great responsibility as holders of influence.
  • Social media companies should be held liable if they fail to comply with a judicial, statutory or regulatory order to remove harmful and misleading content from their platforms, and should be regulated to ensure they comply with this requirement.
  • Technology companies must demonstrate their accountability to users by making themselves fully answerable to national legislatures and other organs of representative democracy.

However, these are just principles; the real work lies in translating them into actual public policy, regulatory enforcement and concrete action.

The path ahead for the grand committee will involve addressing a range of issues and challenges, many of them highlighted by witnesses who have testified so far this week.

Privacy

The exploitation and weaponization of personal information is a dominant and recurrent theme of the hearings. Invited guests and committee members have repeatedly come back to how surveillance capitalism — which Zuboff defined as “the unilateral claiming of private human experience as free raw material for translation into behavioral data” — depends on and is fuelled by the activities of citizens and technology users.

One of the key witnesses before the committee will be Privacy Commissioner of Canada Daniel Therrien, who recently published a report charging that Facebook had broken Canadian privacy laws in the context of the Cambridge Analytica episode.

Facebook disputes the report’s findings — but if the company disagrees with the direction governments are taking, why is it not sending its senior executives to testify before the committee?

Elections

Another issue of significant concern to the members of the committee is the impact of social media on elections.

In his testimony before the committee, Ben Scott, the director of policy and advocacy at Luminate, part of the philanthropic organization Omidyar Network, argued that conspiracy and hate are essentially the business model of algorithmic-driven media, due to their ability to drive and hold people’s attention.

This dynamic was recently evident in Ukraine’s elections, where the measures Facebook took (in response to government pressure) to prevent misinformation and election interference proved insufficient and were easily evaded and undermined.

In last week’s European Parliament elections, a tremendous amount of disinformation was “unleashed,” amplified by algorithms that favour extremist and sensationalist content. The activist network Avaaz studied far-right networks’ use of Facebook and found it to be pervasive, in spite of Facebook’s attempts to prevent it.

Canadian politicians are naturally concerned about this issue as a federal election is on the horizon. Speaking with CBC, Facebook’s Chan said the company will be watching the election closely and removing any pages or users that engage in “coordinated inauthentic behaviour.”

Given Facebook’s track record and its executives’ failure to appear before the committee, little weight or trust should be given to these promises.

Leadership and Break-up

In his testimony before the committee, Jason Kint, the CEO of the trade association Digital Content Next, raised the question of leadership, and whether those leading technology companies should be held responsible or even removed.

There is a growing chorus of voices, including those of Facebook shareholders, calling for Zuckerberg and even Sandberg to step down. After all, this is what would happen in a democratic government: when leaders lose the trust of their constituents, they resign to make way for new leadership.

Perhaps a similar logic drives the suggestion that Facebook be broken up — that the best way to deal with these (near) monopolies is via antitrust action that seeks to restore a competitive environment. In Facebook’s case, that could involve spinning off Instagram, WhatsApp and any other acquisition that prevents a healthy and competitive marketplace. Amazon and Alphabet (Google’s parent company) are also targets for antitrust action.

Audit and Oversight

Another regulatory response is the growing call for audit and oversight.

After a steady stream of bad news and a persistent lack of accountability, how can Facebook be trusted? It can’t. New frameworks for governing Facebook must include external regulatory supervision.

For example, in her testimony, Zuboff argued that technology companies make their money on what she calls “shadow text,” the proprietary side of predictive analytics. We see the public side of digital media in the form of our content, but the shadow side is the data and predictive models created from that content. These predictive models are where the money is made, yet we have no way of scrutinizing how the shadow side works or what information it contains.

Taylor Owen, senior fellow at CIGI and the Beaverbrook Chair in Media, Ethics and Communication at McGill University, similarly argued in his testimony that self-regulation and co-regulation are insufficient to address this problem. In his view, the design and business model of algorithmic media are what undermine democratic norms and institutions. The result is the private monopolization of public (media) infrastructure.

Indeed, almost all witnesses before the committee argued that transparency, oversight and, as Jason Kint urged, audits of user account practices are needed in order to understand the impact and scope of algorithmic media.

Heidi Tworek noted in her testimony that most users of algorithmic media do not know or understand the role that algorithms play in what they see and why.

Owen warned the committee to beware of overly complicated solutions and instead to keep it simple. Just because algorithms are hard to understand does not mean that the solutions have to be equally mystifying. Owen cited increased transparency, greater individual rights for users, updated tax laws targeting algorithmic media advertising, restrictions on acquisitions and concentration of ownership, and increased funding for civic media and literacy initiatives. The issue, he argued, is political will, not technical complexity.

The Digital Charter and the GDPR

In Canada, that political will may finally be forming. The federal government has recently unveiled its plans for a digital charter, a framework to address many of the issues raised by the grand committee. While critics have charged that it is only words, and not actions, it at least provides a starting point for debate during the federal election, and encourages political parties to articulate public policies that address digital technology and platform monopolies.

The path toward Canada’s potential digital charter was certainly galvanized by Europe’s General Data Protection Regulation (GDPR), which provides privacy protections for individuals in the European Union while compelling all businesses operating there to implement substantive data protection measures. The GDPR is often cited by non-EU politicians as an example to follow, and has helped catalyze a global conversation around the regulation of data and technology and the protection of privacy.

The United States is now debating federal privacy legislation, and countries around the world are establishing data protection authorities.

However, a year after taking effect, the GDPR has resulted in very few penalties, all of them small, although potentially large ones are looming.

Speaking with Politico, Paul-Olivier Dehaye, a privacy expert who helped uncover Facebook’s Cambridge Analytica scandal, argued that “big companies like Facebook are 10 steps ahead of everyone else, and 100 steps ahead of regulators. There are very big questions about what they’re doing.”

The GDPR hasn’t stopped or even slowed down the giants. Rather, it was a first step in a dance between regulators and the companies to negotiate a new social contract based on the values associated with digital culture.

A Need for a New Multilateral Institution

A new social contract will be superficial if there is no institution powerful enough to enact or enforce it. This is a looming issue for the grand committee, as its members’ power derives from their respective national governments rather than from a larger multilateral institution.

This is why Jim Balsillie has been calling for a second Bretton Woods. In his testimony before the committee, Balsillie argued for the creation of a new international trade organization with the power and capacity to regulate global technology companies, and for new rules of the road for digitally mediated economies.

As the grand committee continues its work, momentum is building, awareness is sharpening, and regulatory capacity is developing in tandem with growing political will. These may be early days for the larger conversation about what the world of technology — and the world itself — should look like and how it should work, but the results will be substantive and the impact profound. What happens next will depend on how the politicians on the committee articulate their collective response, and how voters support their various proposals.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Jesse Hirsh is a researcher, artist and public speaker based in Lanark County, Ontario. His research interests focus largely on the intersection of technology and politics, in particular artificial intelligence and democracy. He recently completed an M.A. at Ryerson University on algorithmic media.