Two decades ago, platform champions promised that digital platforms would enable transparency and ease access to media, work and government services. In attempting to broker these relationships at scale, platforms have created for themselves the problem of vetting large volumes of unknown participants while minimizing their investment in the human workers who mediate participant-platform relationships. Managers of these platforms adopt a philosophy of automated management, in which opaque machine-learning algorithms designed for fraud detection are used to guess at the “bad actors.” Companies use these algorithms of suspicion to dispense with workers deemed risky, in the name of fraud prevention. The law lets them.
Workers for whom these jobs are a lifeline can wake up one day to find themselves cut off from work without notice. Those brave or stubborn enough to contest the decision may be met with silence from platform technical support. They may be told that there is nothing the platform can do. Or, if they are lucky, the platform may investigate their case and determine that their account was suspended in error. Even in those cases of repair, companies do not redress their error by restoring lost earnings.
Innocent activities such as sharing infrastructure can be flagged as fraud when algorithms are trained to assume each user has a different Wi-Fi network. For example, a data-processing worker in the United States woke up one morning and found herself locked out of her job. On a normal day, she did data processing, transcription and classification — the type of work that powers much of the internet and artificial intelligence. That day, however, she found herself locked out of the marketplace where employers were offering the day’s work. Only after the worker-run advocacy group Turkopticon[1] intervened did Amazon reinvestigate the suspension and admit it had made a mistake. The worker’s account had been suspended because she and her son had logged in from the same Wi-Fi router at home and, thus, had the same Internet Protocol (IP) address. Amazon had interpreted the second login on the same IP address as a worker trying to get paid to do the same social science study twice with different accounts, corrupting research results. Algorithmically accused of fraud, she had no right to recourse in her jurisdiction, despite there being no prohibition on working from the same place in Amazon’s terms of service. Amazon acknowledged the error; however, it offered no compensation for the two weeks of income that the data-processing worker had lost.
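Amazon’s actual fraud model is not public. As a hypothetical sketch, though, the failure mode described above follows from any rule that treats a shared IP address as evidence of duplicate accounts; all names and data below are invented for illustration:

```python
from collections import defaultdict

def flag_shared_ips(logins):
    """Flag every account that ever logs in from an IP address
    another account has also used.

    `logins` is a list of (account_id, ip_address) pairs. This naive
    rule cannot distinguish one person running two accounts from two
    people sharing one home router.
    """
    accounts_by_ip = defaultdict(set)
    for account, ip in logins:
        accounts_by_ip[ip].add(account)
    flagged = set()
    for accounts in accounts_by_ip.values():
        if len(accounts) > 1:        # a "second login" on the same IP
            flagged |= accounts      # every account on it is suspect
    return flagged

# A mother and son sharing a home router are both flagged:
logins = [("mother", "203.0.113.7"), ("son", "203.0.113.7"),
          ("neighbor", "198.51.100.2")]
print(sorted(flag_shared_ips(logins)))  # ['mother', 'son']
```

The rule encodes the assumption the essay identifies: one network, one user. Households, libraries and shared offices all violate that assumption without violating any terms of service.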
While Amazon uses fraud algorithms to guess at bad actors, it uses other algorithms to elevate some workers as trusted. Amazon’s Mechanical Turk platform has developed a program it calls “Masters.” Amazon promotes Masters workers as trusted producers of high-quality work on the platform, a premium tier for workers. Masters workers get exclusive access to a large stream of work; however, Amazon takes a larger cut of what employers pay them.
Workers express frustration and confusion about how to gain access to Masters work. Workers with very high ratings or long track records may not receive the designation. Masters acts as a kind of algorithmic glass ceiling installed by Amazon. A 2014 Amazon patent[2] holds some clues as to how Amazon grants workers this privileged status. The patent suggests that Amazon may calculate a “judge error rate,” or confidence rating, that it assigns to workers behind the scenes.[3] The patent also suggests that Amazon has several strategies for judging the quality of workers. It might place hidden tests with known answers to evaluate individual workers. More controversially, it may have multiple workers complete the same task and — not knowing the answer itself — judge the workers who give the most popular, or “plurality,” answers to be correct. This way of measuring correctness judges workers as less competent when their interpretations deviate from the norm, even if the question is subjective.
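The patent does not disclose the scoring formula, but the plurality strategy it describes can be sketched in a few lines. In this hypothetical illustration (task names, answers and the error-rate definition are all assumptions), a worker with a reasonable but minority reading of a subjective question accrues “errors”:

```python
from collections import Counter

def plurality_error_rates(answers):
    """Score workers against the plurality answer for each task.

    `answers` maps task_id -> {worker_id: answer}. With no ground
    truth available, the most popular answer is treated as correct,
    so deviating from the majority counts as an error even when the
    question has no single right answer.
    """
    errors, attempts = Counter(), Counter()
    for by_worker in answers.values():
        plurality = Counter(by_worker.values()).most_common(1)[0][0]
        for worker, answer in by_worker.items():
            attempts[worker] += 1
            if answer != plurality:
                errors[worker] += 1
    return {w: errors[w] / attempts[w] for w in attempts}

answers = {
    "is_this_review_sarcastic": {"a": "yes", "b": "yes", "c": "maybe"},
    "is_this_spam":             {"a": "no",  "b": "no",  "c": "no"},
}
print(plurality_error_rates(answers))  # worker 'c' is scored 0.5
```

Worker “c” gave a defensible answer to a subjective question, yet under plurality scoring carries a 50 percent “error rate” — the statistical mechanism behind the glass ceiling the essay describes.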
More disturbingly, the Amazon patent also suggests that workers may be evaluated by the speed of task performance, alongside accuracy and error rate. The patent authors, perhaps, do not imagine users with disabilities — for example, repetitive stress injury, impaired vision or neurodiverse cognition — as good workers. Workers who are slower because they are juggling care work in the home may also be judged as less competent. At the scale of data work Amazon processes, the patents evidence an approach that seeks to automate at the cost of indifference to circumstances shaped by gender or disability.
The automation of trust and suspicion can justify unintuitive and expansive forms of corporate surveillance in search of signals of risk. One patent, titled “Authentication and fraud detection based on user behavior,”[4] describes a technique to track habitual user behaviours, such as the sequence of applications opened in the morning, combined with contextual markers such as location, microphone data and relationship to surrounding objects. The company can build a data dossier on each user to guess when the body at the keyboard or on the phone is different from the one who usually logs in.
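The patent describes models rather than code, but the underlying move — comparing a session’s behavioural markers to a stored profile and flagging low overlap — can be sketched as follows. The Jaccard similarity measure, the threshold and all marker names here are illustrative assumptions, not the patent’s method:

```python
def behavior_anomaly(profile, session, threshold=0.5):
    """Compare a session's behavioural markers to a stored dossier.

    `profile` and `session` are sets of observed markers (apps
    opened, coarse location, nearby devices). Low overlap is read
    as "a different body at the keyboard" and flagged for review.
    """
    union = profile | session
    similarity = len(profile & session) / len(union) if union else 1.0
    return similarity < threshold  # True = flag the session

profile = {"mail_at_7am", "browser", "home_wifi", "usual_city"}
session = {"browser", "cafe_wifi", "new_city", "game_app"}
print(behavior_anomaly(profile, session))  # True: session flagged
```

Note what the sketch makes visible: any change of routine — travel, a new device, a borrowed phone — lowers the overlap and raises suspicion, which is exactly why such surveillance is “unintuitive and expansive.”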
These machine-learning approaches do not make rules about what makes a good or bad worker that can easily be explained. Rather, they describe signals that machine learning will “learn” to home in on as it infers a way of guessing which workers are “good” or “bad.” Workers can then be flagged for seeming close — in the eyes of the algorithm — to workers who were previously judged as bad, or for seeming far from workers who were judged as good. To be an outlier, rather than breaking a rule, can be enough for the company to flag a worker as a problem.
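A minimal sketch makes this distinction from rule-based judgment concrete. Here workers are reduced to hypothetical feature vectors (say, tasks per hour and error rate — the features, labels and radius are all assumptions), and a worker is flagged by proximity to labelled examples, not by breaking any stated rule:

```python
import math

def flag_by_similarity(worker, bad_workers, good_workers, radius=1.0):
    """Flag a worker by position in feature space, not by any rule.

    Flagged if they sit closer to the nearest previously 'bad'-labelled
    worker than to the nearest 'good' one, or if they are far from
    everyone (an outlier).
    """
    d_bad = min(math.dist(worker, b) for b in bad_workers)
    d_good = min(math.dist(worker, g) for g in good_workers)
    is_outlier = min(d_bad, d_good) > radius
    return d_bad < d_good or is_outlier

# Feature vectors: (tasks_per_hour, error_rate)
good = [(10, 0.02), (12, 0.03)]
bad = [(40, 0.30)]

# An unusually slow but accurate worker is flagged purely for
# being an outlier — no rule was broken:
print(flag_by_similarity((5, 0.01), bad, good))    # True
print(flag_by_similarity((10.5, 0.02), bad, good)) # False
```

The slow worker’s error rate is lower than anyone’s, yet distance from the labelled examples alone triggers the flag — the inexplicable, rule-free judgment the paragraph describes.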
Fraud engineers who build models often recognize that the algorithms make imperfect guesses, and might recommend, as Anne Jonas and Jenna Burrell (2019) found, that companies use the algorithms to launch an investigation, engage with the user and then decide how to act.
Amazon, like many companies, seems to jump straight to punishing flagged accounts without a clearly outlined appeal or redress process. Former Amazon engineers, Spencer Soper (2021) reported, told management that innocent people might be flagged as bad actors, but management chose to suspend first, correct later (and only for those workers who pursued the matter). Managerial departure from engineering recommendations is not surprising, as companies place all the costs of algorithmic mistakes on the shoulders of workers.
Emerging digital rights frameworks have loopholes that leave workers accused of “fraud” without transparency or recourse. California’s marketplace rights law (AB 1790)[5] requires marketplace operators to disclose grounds for suspending a marketplace seller, except if doing so could “negatively impact the safety or property of another user or the marketplace itself.” The California Consumer Privacy Act (CCPA) of 2018[6] also exempted companies from deleting consumer data on request if the data would be kept to “detect…fraudulent…activity, or prosecute those responsible for that activity.” (A 2020 voter initiative amended this exception in the CCPA.) Companies argue that revealing their algorithms only empowers “adversaries” seeking to game the system.
Companies protected by this opacity shoot first and apologize later, if at all. The law provides little disincentive to tech companies’ commitment to large-scale workforces governed by largely automated industrial relations. This is not a problem of engineering knowledge but rather a problem of companies that treat workers as disposable and without the right to due process. Policy makers must look to workers on platforms, such as those organized through Turkopticon, who are experts in the kinds of harms and vulnerabilities they face, and create policy that strengthens their voice in the platforms they power through their work.
Thank you to the worker organizers of Turkopticon for their insights and contributions to this work.