When Managers Rely on Algorithms of Suspicion: Fraud Logics and Their Fallouts

July 4, 2022

This essay is part of The Four Domains of Global Platform Governance, an essay series that examines platform governance from four distinct policy angles: content, data, competition and infrastructure.

Two decades ago, platform champions promised that digital platforms would enable transparency and ease access to media, work and government services. In attempting to broker these relationships at scale, platforms have created the problem of vetting large volumes of unknown participants while minimizing their investments in human workers to mediate participant-platform relationships. Managers of these platforms adopt a philosophy of automated management, in which opaque machine-learning algorithms designed for fraud detection are used to guess at the “bad actors.” Companies use these algorithms of suspicion to dispense with workers deemed risky, in the name of fighting fraud. The law lets them.

Workers for whom these jobs might be a lifeline can wake up one day to find themselves cut off from work without notice. Those brave or stubborn enough to contest the decision may be met with silence from platform technical support. They may be told that there is nothing the platform can do. Or, if they are lucky, the platform may investigate their case and determine that their account was suspended in error. Even in those cases of repair, companies do not redress their error by restoring lost earnings.

Innocent activities such as sharing infrastructure can be flagged as fraud when algorithms are trained to assume each user has a different Wi-Fi network. For example, a data-processing worker in the United States woke up one morning and found herself locked out of her job. On a normal day, she did data processing, transcription and classification — the type of work that powers much of the internet and artificial intelligence. That day, however, she found herself locked out of the marketplace where employers were offering the day’s work. Only after the worker-run advocacy group Turkopticon1 intervened did Amazon reinvestigate the suspension and admit it had made a mistake. The worker’s account had been suspended because she and her son had logged in from the same Wi-Fi router at home and thus shared the same Internet Protocol (IP) address. Amazon had interpreted the second login on the same IP address as a worker trying to get paid to do the same social science study twice with different accounts, corrupting research results. Algorithmically accused of fraud, she had no right to recourse in her jurisdiction, despite there being no prohibition in Amazon’s terms of service on working from the same place. Amazon acknowledged its error but offered no compensation for the two weeks of income the worker had lost.
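To make the failure concrete, consider a minimal sketch of the kind of duplicate-account heuristic described above. The data structures, account names and flagging rule are hypothetical illustrations, not Amazon’s actual detection logic:

```python
from collections import defaultdict

# Purely illustrative: flag accounts that submitted the same task from the
# same IP address. Field names and records are invented for this sketch.
def flag_shared_ip(submissions):
    accounts_by_task_and_ip = defaultdict(set)
    for record in submissions:
        key = (record["task_id"], record["ip_address"])
        accounts_by_task_and_ip[key].add(record["account_id"])

    flagged = set()
    for (task_id, ip), accounts in accounts_by_task_and_ip.items():
        if len(accounts) > 1:  # two accounts, one router: a household, or "fraud"?
            flagged.update(accounts)
    return flagged

# A mother and son working from the same home router trip the rule.
submissions = [
    {"task_id": "survey-42", "ip_address": "203.0.113.7", "account_id": "worker_a"},
    {"task_id": "survey-42", "ip_address": "203.0.113.7", "account_id": "worker_b"},
]
print(flag_shared_ip(submissions))  # both accounts flagged
```

The sketch shows why the heuristic misfires: nothing in the shared IP address distinguishes a household sharing one connection from a single person running duplicate accounts.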

While Amazon uses fraud algorithms to guess at bad actors, it uses other algorithms to elevate some workers as trusted. Amazon’s Mechanical Turk platform has developed a program it calls “Masters.” Amazon promotes Masters workers to employers as trusted producers of high-quality work on the platform, marketed at a premium. Masters workers get exclusive access to a large stream of work; however, Amazon takes a larger cut of what employers pay for that work.

Workers express frustration and confusion about how to gain access to Masters work. Workers with very high ratings or long track records may not receive the designation. Masters acts as a kind of algorithmic glass ceiling installed by Amazon. A 2014 Amazon patent2 holds some clues as to how Amazon grants workers this privileged status. The patent suggests that Amazon may calculate a “judge error rate,” or confidence rating, that it assigns to workers behind the scenes.3 The patent also suggests that Amazon has several strategies for judging the quality of workers. It might place hidden tests with known answers among the tasks to test individual workers. More controversially, it may have multiple workers complete the same task and, not knowing the answer itself, judge as correct the workers who give the most popular or “plural” answer. This way of measuring correctness judges workers as less competent when their interpretations deviate from the norm, even when the question is subjective.
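A minimal sketch can make the plurality strategy concrete. The task and worker identifiers below are hypothetical and none of this code is drawn from the patent; it simply treats the most common answer as “correct” and scores every worker against it:

```python
from collections import Counter

# Illustrative plurality scoring: with no known answer, the most popular
# response is deemed correct and each worker's error rate is measured
# against it. Tasks, workers and answers are invented for this sketch.
def plurality_error_rates(answers_by_task):
    errors = Counter()
    attempts = Counter()
    for task_id, answers in answers_by_task.items():
        plural_answer, _ = Counter(answers.values()).most_common(1)[0]
        for worker, answer in answers.items():
            attempts[worker] += 1
            if answer != plural_answer:
                errors[worker] += 1
    return {worker: errors[worker] / attempts[worker] for worker in attempts}

# A subjective labelling task: the minority interpretation is scored as error.
answers_by_task = {
    "image-1": {"w1": "joyful", "w2": "joyful", "w3": "ambivalent"},
    "image-2": {"w1": "angry", "w2": "angry", "w3": "angry"},
}
print(plurality_error_rates(answers_by_task))
# {'w1': 0.0, 'w2': 0.0, 'w3': 0.5} -- w3's reading of image-1 counts against them
```

Under this scoring, a defensible minority interpretation of an ambiguous image is indistinguishable from a mistake.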

More disturbingly, the Amazon patent also suggests that workers may be evaluated by the speed of task performance, alongside accuracy and error rate. The patent authors, perhaps, do not imagine users with disabilities — for example, repetitive stress injury, impaired vision or neurodiverse cognition — as good workers. Workers who are slower because they are juggling care work in the home may also be judged less competent. At the scale of data work Amazon processes, the patents evidence an approach that seeks to automate even at the cost of indifference to circumstances shaped by gender or disability.

The automation of trust and suspicion can justify unintuitive and expansive forms of corporate surveillance in search of signals of risk. One patent, titled “Authentication and fraud detection based on user behavior,”4 describes a technique to track habitual user behaviours, such as sequences of applications opened in the morning, combined with contextual markers such as location, microphone data and relationship to surrounding objects. The company can build a data dossier on each user to guess at when the body at the keyboard or on the phone is different than the one who usually logs in.
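A minimal, purely illustrative sketch of such behaviour-based checking follows. The signals (morning application sequence, city, typing speed), the stored profile and the threshold are invented for illustration; the patent describes the general technique, not these particulars:

```python
# Purely illustrative: compare a session's contextual signals against a stored
# dossier of the account holder's habits and flag sessions that deviate.
# Signals, profile values and threshold are invented, not drawn from the patent.
HABIT_PROFILE = {
    "morning_app_sequence": ("mail", "browser", "spreadsheet"),
    "usual_city": "San Diego",
    "typing_speed_wpm": 72,
}

def mismatched_signals(session):
    """Count how many habitual signals this session fails to match."""
    mismatches = 0
    if session["morning_app_sequence"] != HABIT_PROFILE["morning_app_sequence"]:
        mismatches += 1  # unfamiliar routine
    if session["city"] != HABIT_PROFILE["usual_city"]:
        mismatches += 1  # unfamiliar location
    if abs(session["typing_speed_wpm"] - HABIT_PROFILE["typing_speed_wpm"]) > 20:
        mismatches += 1  # typing rhythm does not match the dossier
    return mismatches

session = {
    "morning_app_sequence": ("browser", "mail", "spreadsheet"),
    "city": "San Diego",
    "typing_speed_wpm": 45,  # a tired or injured hand types differently
}
if mismatched_signals(session) >= 2:
    print("challenge or suspend account")  # this session trips the rule
```

The point of the sketch is the failure mode: the same person on an off day, on a new device or at a shared computer can look, to such a dossier, like a different body at the keyboard.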

These machine-learning approaches do not make rules about what makes a good or bad worker that can easily be explained. Rather, they describe signals that machine learning will “learn” to home in on as it infers a way of guessing which workers are “good” or “bad.” Workers can then be flagged for seeming close, in the eyes of the algorithm, to workers who were previously judged as bad, or for seeming far from workers who were judged as good. Being an outlier, rather than breaking a rule, can be enough for the company to flag a worker as a problem.
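The logic of “seeming close to bad workers” can be sketched, under heavy simplification, as a nearest-neighbour comparison. The features and labelled examples below are invented, and real fraud models use far more signals and more elaborate models, but the failure mode is the same:

```python
import math

# Illustrative only: score a worker by which previously labelled account they
# most resemble. Features are (tasks per hour, error rate, accounts per IP);
# all values and labels are invented for this sketch.
LABELLED = [
    ((120.0, 0.02, 3.0), "bad"),   # suspected bot: very fast, several accounts per IP
    ((15.0, 0.05, 1.0), "good"),
    ((18.0, 0.04, 1.0), "good"),
]

def nearest_label(features):
    return min(LABELLED, key=lambda example: math.dist(features, example[0]))[1]

# A legitimate power user who works quickly from a shared household connection
# lands nearest the "bad" example and gets flagged, despite breaking no rule.
print(nearest_label((95.0, 0.03, 2.0)))  # 'bad'
```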

Fraud engineers who build models often recognize that the algorithms make imperfect guesses, and might recommend, as Anne Jonas and Jenna Burrell (2019) found, that companies use the algorithms to launch an investigation, engage with the user and then decide how to act.

Amazon, like many companies, seems to jump straight to punishing flagged accounts without a clearly outlined appeal or redress process. Former Amazon engineers, Spencer Soper (2021) reported, told management that innocent people might be flagged as bad actors, but management chose to suspend first and correct later (and only for those workers who pursued the matter). Managerial departure from engineering recommendations is not surprising, as companies place all the costs of algorithmic mistakes on the shoulders of workers.

Emerging digital rights frameworks have loopholes that leave workers accused of “fraud” without transparency or recourse. California’s marketplace rights law (AB 1790)5 requires marketplace operators to disclose grounds for suspending a marketplace seller, except if it could “negatively impact the safety or property of another user or the marketplace itself.” The California Consumer Privacy Act (CCPA) of 20186 also exempted companies from deleting consumer data on request if data would be kept to “detect…fraudulent…activity, or prosecute those responsible for that activity.” (A 2020 voter initiative amended this exception in the CCPA.) Companies argue that revealing their algorithms only empowers “adversaries” seeking to game the system.

Companies protected by this opacity shoot first and apologize later, if at all. The law provides little disincentive for tech companies committed to managing large-scale workforces through largely automated industrial relations. This is not a problem of engineering knowledge but rather a problem of companies that treat workers as disposable and without the right to due process. Policy makers must look to workers on platforms, such as those of Turkopticon, who are experts in the kinds of harms and vulnerabilities they face, and create policy that strengthens their voice in the platforms they power through their work.

Acknowledgment

Thank you to the worker organizers of Turkopticon for their insights and contributions to this work.

  1. See www.blog.turkopticon.net.
  2. “Evaluation of Task Judging Results”, US Patent No 8,868,471 (21 October 2014), online: <https://patentimages.storage.googleapis.com/7f/27/47/07d5bb26cafb5d/US8868471.pdf>.
  3. Ibid., Figure 1.
  4. “Authentication and Fraud Detection Based on User Behavior”, US Patent No 10,108,791 (23 October 2018), online: <https://patentimages.storage.googleapis.com/1f/b7/bf/3f337fd4521141/US10108791.pdf>.
  5. See US, Bill AB 1790, ch 635, AB-1790 Marketplaces: marketplace sellers, online: <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1790>.
  6. See US, Bill S 1121, ch 735, California Consumer Privacy Act of 2018, online: <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121>.

Works Cited

Jonas, Anne and Jenna Burrell. 2019. “Friction, snake oil, and weird countries: Cybersecurity systems could deepen global inequality through regional blocking.” Big Data & Society 6 (1): 1–11.

Soper, Spencer. 2021. “Fired by Bot at Amazon: ‘It’s You Against the Machine.’” Bloomberg, June 28. www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine-managers-and-workers-are-losing-out.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Lilly Irani is an associate professor of communication and science studies at the University of California, San Diego.

The Four Domains of Global Platform Governance

In the span of 15 years, the online public sphere has been largely privatized and is now dominated by a small number of platform companies. This has allowed the interests of publicly traded companies to determine the quality of our civic discourse, the character of our digital economy and, ultimately, the integrity of our democracies. This essay series brings together a global group of scholars working in four distinct domains of the platform governance policy discourse: content, data, competition and infrastructure.