After 18 months of investigating disinformation, fake news and the growing influence of social media platforms, a British parliamentary committee concluded that between 2011 and 2015 Facebook intentionally and knowingly violated both data privacy and anti-competition laws, and that it must be overseen by an independent regulator.
The committee’s chair, British Member of Parliament Damian Collins, argued that the “age of inadequate self-regulation must come to an end.”
He’s not alone in calling for this. A growing chorus of world leaders and politicians is also calling for greater regulation of technology companies. At the recent World Economic Forum held in Davos, Switzerland, the leaders of China, Germany, Japan and South Africa all voiced support for global rules to govern the technology industry and the transnational flows of data.
Japan, in particular, is looking to use its current chairmanship of the Group of Twenty nations to push for some kind of global regulatory framework. The World Trade Organization is regarded as one body within which to achieve and manage such regulation.
Nick Clegg, the former deputy prime minister of the United Kingdom and ex-leader of the British Liberal Democrats, is currently Facebook’s vice-president for global affairs and communications. At a recent event in Brussels, Clegg was quoted by Politico as acknowledging that Facebook is “at the start of a discussion which is no longer about whether social media should be regulated, but how it should be regulated. We recognize the value of regulation, and we are committed to working with policymakers to get it right.” This echoes Facebook head Mark Zuckerberg’s own remarks when he testified before Congress.
Following Zuckerberg’s appearance before Congress — and other high-profile missteps by social media giants — it seems that elected officials are in a rush to illustrate that they are, indeed, on the public’s side, and that they will protect citizens from digital behemoths. The goal is, in theory, an admirable one. But in practice, politicians and governments are aiming too low as they scramble to regulate technology companies.
Currently, companies like Facebook and Twitter are largely regulated as if they are media companies rather than technological platforms. That designation isn’t totally off-base, since they accept advertising revenue and share content — activities that fall within the purview of a publisher or media organization. Fake news, fake accounts, misleading information, trolling and electoral interference are also important and legitimate reasons for regulation. Privacy and data protection on these sites are likewise being considered within the context of media, specifically the way in which data or personal information can be copied, published or otherwise distributed without consent or proper permission.
While media regulation frameworks have managed to rein in technology companies in the few areas listed above, the policies are also short-sighted and inadequate. Simply put, they were written for publishing companies, not ever-expanding platforms.
The term “social media” may be partly responsible for this approach.
Media regulation in democratic societies has traditionally been a delicate balance between the rights and freedoms of citizens, and the responsibilities that come with the power of publishing. Governments do not want to be seen as muzzling the free press, and as a result, regulations have been light and only applied in extreme cases like hate speech and defamation.
Similarly, governments don’t want to be viewed as interfering with the freedoms associated with social media, in particular freedom of expression (and to some extent, freedom of assembly), and as a result, treat social platforms with kid gloves. Many governments have yet to regulate Facebook, Twitter or other platforms as they do media companies, although these giants hold as much power as traditional media, if not more.
Europe has started to move in that direction. Google was recently fined €50 million for violating the General Data Protection Regulation (GDPR). Specifically, the company was found to have not sufficiently gained users’ consent to use their data in the provision of personalized advertising. Consent speaks to the power imbalance that exists between users and social media companies — and is certainly outside the scope of media regulation frameworks.
Traditional media organizations, while powerful and influential in society, do not actually hold much sway or control over their audience or subscribers. At any time, the audience or advertisers could choose to move their attention or business elsewhere.
The same cannot be said of social media companies. While many claim the ability or desire to quit or leave a platform, few actually do. And even when users do choose to leave a platform, social media companies often have possession of both personal information and social connectivity that cannot be exported or migrated easily. Facebook, Google and similar services are nearly impossible to avoid. Their ability to track digital activity and influence people beyond their platforms is significant.
This is why Dwayne Winseck, professor at the School of Journalism and Communication at Carleton University, argues that if any existing regulatory framework should regulate technology companies, it should be those that regulate banking and financial services — not media companies.
For Winseck, the relationship is quite simple: data has become a kind of currency, a commodity of considerable value and, much like a relationship with a bank, our relationship with a technology company is based on trust. A technology company manages data on our behalf in the same way that a bank manages money on our behalf.
The relationship between user and corporation presents more similarities still; banks and technology companies alike promise to keep our assets (whether data or money) secure, and in both cases, consumers are largely unable to verify or audit that those promises have been upheld.
Policy recognizes the power that a bank wields, and therefore has strict regulations and harsh penalties to ensure that financial institutions do not abuse their position or the trust that users have placed in them. A few of these regulations could be applicable to social media platforms.
Deposits: Policy requires banks to have clear, accessible rules around deposits. Banks need to prove that they not only protect their clients’ money, but that clients can access money when they want it (think deposit insurance). If technology companies applied similar requirements to privacy and data protection, users would have increased ease of data mobility; if they were unhappy with a platform, they could withdraw their data and move to another service entirely.
Governance models: Banks also have regulations around governance. They need to be well managed, have proper oversight, compliance and overall responsibility. Social media companies generally lack proper governance models and would benefit from regulations that forced them to take governance more seriously.
Social and economic stability: Banks are also regulated with an eye to preserving social and economic stability because generally, if not well regulated, banks are in a position to seriously destabilize a society. Similar social and political disruption is arguably a risk when trust is placed in the hands of a poorly regulated social media platform. Social media’s role in influencing election outcomes is example enough.
Of course, the financial services industry may not be a direct parallel to the technology industry, but Winseck makes a convincing argument: financial regulation frameworks could have something to offer when it comes to improved social media regulation.
At the very least, the comparison is a good reminder that right now, the policy needed to effectively rein in technology companies and protect citizens from the services they use on a daily basis is not in place.