B(IA)S

From Authors to Raters: The New Invisible Workers of AI

A silent army sustains the illusion of machine autonomy: the legion of raters, the human operators hidden behind the scenes of the AI ecosystem. They are the ones who evaluate, filter, and correct the outputs of models like Gemini, following ever-changing guidelines and working at an exhausting pace – exposed daily to the darkest corners of the web. In an investigation, The Guardian gathered their voices: from the shock of moderating disturbing content to the panic attacks triggered by toxic working conditions.

But who are these raters, and what price do they pay? Hired through contracting firms such as GlobalLogic, some under vague titles like “writing analyst,” they soon discover that their task is to evaluate responses on topics like health, science, and finance – subjects in which they often have no expertise. Each task must be completed within minutes, dozens of times a day, under guidelines that change on the fly – without clarity or psychological support. Their stories tell of sleep disorders, emotional trauma, and isolation. Some even avoid AI tools in their daily lives rather than confront what they help produce.

It doesn’t take a sociologist’s eye to perceive the injustice: the rights being denied today are not those of machines, but of the people who invisibly uphold the “intelligence” those machines represent. At a rhetorical and strategic level, a new construct has emerged: model welfare. As Big Tech companies promote rights, ethical codes, and “care” for artificial entities, sociologist Antonio Casilli calls this a strategic diversion: “they want to grant rights to machines to avoid granting them to workers,” he argues. It is a symbolic maneuver that shifts attention away from the material conditions of human operators toward the supposed “moral” value of artificial intelligences. Companies declare protection and responsibility toward their models, while in the offices where raters work, precarious contracts, a lack of training, and a code of silence prevail.

Image via Google Creative Commons.

In response to this paradigm, a grassroots agenda is needed, one that overturns the hierarchy between invisible operators and visible systems. The AI Now Institute, in its 2025 landscape report Artificial Power, calls for a logic of zero trust toward Big Tech: it is not enough to trust self-declared responsibility – companies must be structurally monitored, transparency guaranteed, independent audits enforced, and workers genuinely involved. The report demands that regulation focus not only on the machines but on the power relations that sustain them – and that raters be allowed to organize, unionize, and obtain clear contracts and effective reporting channels.

The system described above is not an accident but the deliberate design of cognitive capitalism. AIs are not mere tools; they are infrastructures that exert power over us. Corporations push the narrative of a benign, autonomous AI – one that discriminates or errs only due to technical limitations – while rarely acknowledging the human labor behind it or the subjectivity of those performing it. It is the contemporary version of the master-worker relationship: today the master takes an ethereal form (an algorithm), and the worker is flexible, remote, uncounted.

Ultimately, the problem is not how “intelligent” machines can become, but how inhuman the system that enables them might grow. As long as those who feed AI remain invisible, any talk of progress will remain a paradox. Recognizing the role of raters, listening to them, and guaranteeing their rights is not an act of pity but one of transparency: it means restoring a human face to a technology that, without them, would not exist.

Alessandro Mancini

A graduate in Publishing and Writing from La Sapienza University in Rome, he is a freelance journalist, content creator, and social media manager. Between 2018 and 2020 he was editorial director of Artwave.it, the online magazine specialising in contemporary art and culture that he founded in 2016. He writes and speaks mainly about contemporary art, labour, inequality, and social rights.