- The generation, control and use of data are inherently political activities governed by formal and informal laws, regulations and norms.
- Because the rules governing data have society-wide effects, governments have an important role to play in constructing and limiting the market for data.
- In regulating this economy, policy makers must take into consideration the unique dynamics of a data-based economy and the central political issues of control over and use of data.
- Governments must also confront the reality that the surveillance required for the efficient functioning of a data-driven economy conflicts with, among other things, the norms supporting a liberal-democratic society.
We have constructed a data-driven economy and society, in which the list of what can be turned into data and commodified — heartbeats, conversations, our expressed preferences — is limited only by our imaginations. Scholars, trying to come to grips with how this economy works, have referred to this phenomenon as “data capitalism” (West 2017), the “surveillance economy” (Zuboff 2015), “platform capitalism” (Srnicek 2017) and the “information-industrial complex” (Powers and Jablonski 2015), among other names. These conceptualizations all share an appreciation of the fact that the control of knowledge, such as data and intellectual property, is fast becoming the key determinant of economic, social and political power. Production powerhouses such as General Motors have been supplanted in terms of market capitalization and innovation by Google, while industries old and new are increasingly acting according to economic logics that prioritize the capture and commodification of data (Srnicek 2017).
Like all economies, and like data itself, the data-driven economy is created by people through social conventions and norms, laws and regulations, algorithms (which are merely digitized sets of rules) and social interactions. These rules and norms affect what data is created, who controls this data (“big,” personal or otherwise) and to what use data is put. They are also inherently political, inevitably favouring certain groups and outcomes over others.
Who will set the rules and norms of this new economy is one of the biggest issues currently facing policy makers. To date, the framework of the data economy has been set primarily by those private actors for whom the control of data is most central to their existence, such as Google, Amazon and Uber. Most governments, including Canada’s, have yet to establish policy in this area, with the notable exception of the European Union’s General Data Protection Regulation.1
Driven by concerns about personal privacy (for example, Schwartz 1999; Obar and Oeldorf-Hirsch 2016), economic benefits and the potentially widespread destabilizing uses of personal data (Madrigal 2017), more and more people are asking questions about who should control this data and for what purposes. This essay argues that, as the main actors responsible for mediating social objectives and the conflicts of self-interested actors, governments have a fundamental role to play in constructing the data-driven economy. Drawing on political economist Karl Polanyi’s thinking about fictitious commodities, it argues that state regulation is necessary not only to promote economic prosperity, but to limit the data-driven economy’s excesses so that it does not endanger non-economic societal priorities.
Data as a Fictitious Commodity
The concept of data itself remains contested. For some, it is a natural, neutral representation of reality, “information collected, stored and presented without interest” (Ruppert, Isin and Bigo 2017, 3). From this perspective, knowledge, like oil, is all around us, just waiting to be discovered and exploited.
This perspective is deeply misleading. Data is a partial form of knowledge that we create to interpret an independently existing world: data “does not just exist — it has to be generated” (Manovich 2001, 224). This generation is undertaken by people and inevitably reflects the conscious and unconscious biases of those responsible for generating data, as in the case of Google’s image recognition algorithm, which labelled photos of Black people as “gorillas” (Vincent 2018). Data “is not an already given artefact that exists (which then needs to be mined, analyzed, brokered) but an object of investment (in the broadest sense) that is produced by the competitive struggles of professionals who claim stakes in its meaning and functioning” (Ruppert, Isin and Bigo 2017, 1).
Data is what Polanyi would call a “fictitious commodity,” created and defined by social conventions and human-made rules. In his monumental work The Great Transformation (2001), he applied the term fictitious commodity to land, labour and money. Genuine market commodities are produced in order to be bought and sold. However, noted Polanyi, neither land, nor labour, nor money is actually produced for sale. As Bob Jessop (2007, 16) remarks, “what we call labour is simply human activity, whereas land is the natural environment of human beings, and money is just an account of value.”
Data can also be seen as a fictitious commodity. What we call data is simply a measure of human activity or of the natural world, both of which exist independently of any desire to measure them: heartbeats, for example, exist before and independently of a desire to measure them. Data, by contrast, is always collected for some purpose. Commodifying data, detaching it from the individuals or contexts that produced it, gives it an instrumental (often for-profit) character, often placing it in a closed economic system and under the control of specific groups or individuals (Jessop 2007). Context matters a great deal when evaluating the benefits of datafication: there is a great difference between heartbeats measured by a doctor to improve a patient’s health and heartbeats measured by an insurance company that wants to limit coverage to supposedly “healthy” people.
Forgetting that land, labour and money are merely useful conceits can have disastrous consequences. Nature treated only as real estate risks environmental ruin. Treating humans instrumentally, as mere economic fuel, finds its extreme in the institution of slavery. Similarly, ignoring that data is an imperfect, partial rendering of reality can lead to perverse policy outcomes, as when data-driven financial artificial intelligence systems offer higher interest rates to Black and Latino borrowers than to Asian or white ones (Alang 2017).
That data, land, labour and money are fictitious commodities means that the rules that govern them are set by people. Because rules are set by people, they will create winners and losers. Individual actors, left to their own devices, will try to set the rules to their own advantage. It is up to governments, through legislation, regulation, investment and moral suasion, to maximize the economic and non-economic benefits of the data-driven economy at a societal level.
Understanding the Data-driven Economy
The market for data will be constructed whether governments act or not. Government involvement is nonetheless necessary to ensure that this market functions in a socially optimal manner rather than in the interests of its most powerful actors. The following three points illustrate the types of issues that government regulation of the data-driven economy must face.
The data-driven economy must be understood and regulated on its own terms: The data-driven economy runs according to a different logic than one that prioritizes finance or production. Consequently, policy making designed to maximize employment and economic activity in a production-based economy will not necessarily have the same effects when targeting the data-intensive giants of the information age. Previously, for example, it would have been a great coup to attract a company’s head office or production facility to one’s town or province because of the number of jobs this move would generate. However, tech-based companies are not large employers. Shoshana Zuboff (2015, 80) notes: “The top three Silicon Valley companies in 2014 had revenues of $247 billion, only 137,000 employees and a combined market capitalization of $1.09 trillion. In contrast, even as late as 1990, the three top Detroit automakers produced revenues of $250 billion with 1.2 million employees and a combined market capitalization of $36 billion.”
Similarly, while free trade policies may make sense (provided certain conditions are met) for a manufacturing-based economy, allowing the free flow of (intangible) data and intellectual property across borders raises several concerns. The most obvious issue has to do with the privacy of citizens’ personal data in countries with lax personal data protections. It is also not clear that the free-trade analogy is the most appropriate way to think about cross-border data flows: intangible commodified data does not function economically in the same manner as tangible widgets. Just as most economists now concede that free cross-border capital flows can be incredibly destabilizing (Beattie 2012), some restrictions on cross-border data flows may make economic sense, because the proprietary control of data invites potentially global anti-competitive network effects, to name only one issue (Organisation for Economic Co-operation and Development 2016). At any rate, this issue must be studied on its own terms, not through inappropriate analogies to trade in goods.
Who controls data, and to what end, are crucial political questions: A data-driven economy is founded on the ability to control data. Who controls data, who decides what data is worth collecting and how data is used are therefore key political questions with society-wide ramifications. For example, as Teresa Scassa (2017) remarks in the context of Airbnb’s proprietary collection of housing-related data, access to data is essential for the planning and delivery of heretofore public services. Such control over data can also be used to create relations of economic dependency that more closely resemble feudal economies than free markets. In the increasingly infamous case of John Deere tractors, farmers must pay for access to the proprietary information on “soil and crop conditions” collected by the sensors in the tractors the farmers purchased from John Deere (Bronson and Knezevic 2016, 1).
Balancing the complex economic and non-economic interests of all stakeholders is something that only governments can do, and it requires full and democratic consultation. Resolving these issues will necessarily create winners and losers. For example, providing individuals with strong rights to control how the data they generate is used will necessarily affect those industries whose business model depends on the collection and commodification of this data.
A society based on the exploitation of knowledge requires constant surveillance in order to function properly and efficiently: A data-driven economy derives value from the identification, commodification and use of ever-expanding data flows. Capturing all desired data requires continuous monitoring of as many activities as possible. It is for this reason that, in the words of Andrew Ng, head of artificial intelligence at Baidu and the founder of the Google Brain project, tech companies “often launch products not for the revenue but for the data ... and we monetize the data through a different product” (Ng quoted in Morozov 2018). Constant surveillance is also fundamental to the functioning of internet-connected devices that work only with a constant data stream.
A data-driven economy, in other words, is also a “surveillance economy” (Zuboff 2015). It has long been established that the mere threat or assumption of constant surveillance can negatively affect people’s actions, leading them to refrain from expressing potentially unpopular opinions (Schwartz 1999). This type of self-censorship is anathema to life in a liberal-democratic society.
This challenge does not only appear in the economic realm. The economic logic of efficiency that drives companies to maximize their data collection is apparent in the realm of national security. Even liberal-democratic states such as Canada have engaged in ever-growing surveillance of their citizens (Kari 2017). The logic in the security and economic cases is the same: in a knowledge economy, anything less than total surveillance is seen as a potential threat or economic loss.
In a surveillance economy and society, therefore, effective democratic oversight of both the state and economic actors is essential to resolving the tension between the threats posed by such surveillance and the necessary role of surveillance in enabling the data-driven economy.
Conclusion: Enabling and Restraining the Data-driven Economy
While the state has a crucial role to play in constructing the data-driven economy, its most important role will be in setting the limits on this economy. In an economy where value is created through the commodification and use of data, the temptation to create more value through ever-greater “datafication” of our social lives and the natural world will be almost overwhelming: failure to do so will amount to leaving “money on the table.” However, as Polanyi’s discussion of fictitious commodities suggests, that way lies disaster, not least through the over-expansion of surveillance.
Minimum-wage laws and the maintenance of national parks are justified by appeals to fundamental notions of human dignity and the need for environmental protection, not primarily on economic grounds. Similarly, decisions about what should not be surveilled and turned into data, and what forms of data usage are beyond the pale, need to be based not just on economic values, but on the greater needs of a liberal-democratic society.
Alang, Naveet. 2017. “Turns out algorithms are racist.” New Republic, August 31. https://newrepublic.com/article/144644/turns-algorithms-racist.
Beattie, Alan. 2012. “IMF drops opposition to capital controls.” Financial Times, December 3. www.ft.com/content/e620482e-3d5c-11e2-9e13-00144feabdc0.
Bronson, Kelly and Irena Knezevic. 2016. “Big Data in food and agriculture.” Big Data & Society 3 (1): 1−5. doi:10.1177/2053951716648174.
Jessop, Bob. 2007. “Knowledge as a Fictitious Commodity: Insights and Limits of a Polanyian Analysis.” In Reading Karl Polanyi for the Twenty-first Century, edited by Ayse Buğra and Kaan Ağartan, 115−34. Basingstoke, UK: Palgrave.
Kari, Shannon. 2017. “The new surveillance state.” Canadian Lawyer. October 2. www.canadianlawyermag.com/author/shannon-kari/the-new-surveillance-state-13735/.
Madrigal, Alexis C. 2017. “What Facebook Did to American Democracy.” The Atlantic, October 12. www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/.
Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: The MIT Press.
Morozov, Evgeny. 2018. “Will tech giants move on from the internet, now we’ve all been harvested?” The Guardian, January 28. www.theguardian.com/technology/2018/jan/28/morozov-artificial-intelligence-data-technology-online.
Obar, Jonathan A. and Anne Oeldorf-Hirsch. 2016. “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services.” http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757465.
Organisation for Economic Co-operation and Development. 2016. “Big Data: Bringing competition policy to the digital era.” October 27. https://one.oecd.org/document/DAF/COMP(2016)14/en/pdf.
Polanyi, Karl. 2001. The Great Transformation: The Political and Economic Origins of Our Time. Boston, MA: Beacon Press.
Powers, Shawn and Michael Jablonski. 2015. The Real Cyber War: The Political Economy of Internet Freedom. Chicago, IL: University of Illinois Press.
Ruppert, Evelyn, Engin Isin and Didier Bigo. 2017. “Data politics.” Big Data & Society 4 (2): 1–7.
Scassa, Teresa. 2017. “Sharing Data in the Platform Economy: A Public Interest Argument for Access to Platform Data.” UBC Law Review 50 (4): 1017−71.
Schwartz, Paul M. 1999. “Privacy and Democracy in Cyberspace.” Vanderbilt Law Review 52 (6): 1609−702.
Srnicek, Nick. 2017. Platform Capitalism. Cambridge, UK: Polity Press.
Vincent, James. 2018. “Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech.” The Verge, January 12. www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.
West, Sarah Myers. 2017. “Data Capitalism: Redefining the Logics of Surveillance and Privacy.” Business & Society: 1−22. doi:10.1177/0007650317718185.
Zuboff, Shoshana. 2015. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30: 75−89.