Since the 2016 US election, there has been growing public concern and debate about the effect that platforms like Facebook, Twitter and Google are having on the integrity of democracy and on the character of society. Although the term “fake news” is often misused to discredit the free press, misinformation originating from social media platforms that operate in unregulated spaces presents a very real threat to democracy.

In this video, University of British Columbia assistant professor Taylor Owen explains that governments need to act quickly to regulate these platforms, but the solution is not always clear. Although election laws in democratic countries dictate what can be said and to whom, social media platforms aren’t required to disclose who is spending what money, from where in the world, and to which audience. Because governments have taken a hands-off approach to the technology sector, social media platforms are left to govern themselves.

Owen argues that ultimately, only policy that reflects the way in which reality is increasingly shaped by digital technology will lead to better governance of social media platforms. Without effective policy tools — particularly those expressed in national data strategies — the spread of misinformation and the associated threat to democracy will continue.

Transcript

Since the 2016 US election, there has been a growing public concern and debate about the effect and the role that platforms like Facebook and YouTube are having on the integrity of our democracy and on the character of our society.

It used to be if you wanted to reach a large audience, you needed to buy an ad with a broadcaster, or a publication, that claimed to reach the type of group you wanted to target. So, if you wanted to reach upper-middle-class people in Manhattan, you would buy ads in The New York Times. That was the granularity with which you could target people.

What Facebook offers is of an entirely different order of magnitude of specificity. You can specify any number of hundreds of attributes of the people and locations you might want to target, and go directly to them.

This is a particular problem during elections, which is the one time in democratic society where we have very strict rules on who can say what, and who can purchase which audience. Why I think this is such a clarifying moment for how we view these platform companies is that they have created an architecture that is in breach of those rules. On Facebook, you don’t know who is buying access to Canadian eyeballs. They don’t disclose that — we don’t know who is spending what money, from where in the world, and to which audience. The problem is, because governments have had a hands-off approach to the technology sector, we are leaving responsibility for these governance challenges to the companies themselves.

There are two primary ways that governments have attempted to control the problem of misinformation that our society seems to be plagued by right now. One is by controlling what is acceptable speech in their society. Governments have always regulated speech to a certain degree, but some countries, such as Germany and France, are taking it upon themselves to penalize platform companies that do not abide by those very strict rules of what is acceptable speech. This runs into a whole host of problems, because governments are increasingly having to be the judge and adjudicator of what is and is not acceptable speech.

The other main way that governments are dealing with this problem is actually by enabling the rights of their citizens to control their own data. So instead of limiting the rights of their citizens to speak, they are enabling the rights of their citizens to have ownership and control over the data that they produce. And I think the most sophisticated example of this, at the moment, is the new European General Data Protection Regulation. What it essentially says is that: “I as an individual citizen, if I live in the EU, have a right to know if data is being collected about me; if I opt out of that data being collected about me, the company that was collecting it can’t deny me services; and the company has to provide a clear indication about how that data is being used by them.” And this, I think, fundamentally changes the power relationship between citizens who are providing the data, and companies that are taking it and using it.

Given the influence of digital technology and the internet on our economy and our society, it’s impossible for me to imagine our government not having a national data strategy. But having one is going to mean taking very seriously the way in which data about Canadians is used, the way in which our realities are increasingly shaped by digital technology, and the way in which our economy is structured around what I think is a very pernicious and monopolistic digital architecture.

This series of videos explores topics including the rationale for a data strategy, the role of a data strategy for Canadian industries, and policy considerations for domestic and international data governance.
The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
