Meet the AI workers who tell their friends and family to stay away from AI

When the people making AI seem trustworthy are the ones who trust it the least, it shows that incentives for speed are overtaking safety, experts say

Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. As an AI worker on Amazon Mechanical Turk – a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output – Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking. Roughly two years ago, while working from home at her dining room table, she took up a job designating tweets as racist or not. When she was presented with a tweet that read “Listen to that mooncricket sing”, she almost clicked on the “no” button before deciding to check the meaning of the word “mooncricket”, which, to her surprise, was a racial slur against Black Americans.

“I sat there considering how many times I may have made the same mistake and not caught myself,” said Pawloski.

The potential scale of her own errors and those of thousands of other workers like her made Pawloski spiral. How many others had unknowingly let offensive material slip by? Or worse, chosen to allow it?

After years of witnessing the inner workings of AI models, Pawloski decided to stop using generative AI products personally and tells her family to steer clear of them. “It’s an absolute no in my house,” said Pawloski, referring to how she doesn’t let her teenage daughter use tools like ChatGPT. With the people she meets socially, she encourages them to ask AI about something they are very knowledgeable in so they can spot its errors and understand for themselves how fallible the tech is.

Pawloski said that every time she sees a menu of new tasks to choose from on the Mechanical Turk site, she asks herself whether there is any way what she’s doing could be used to hurt people – many times, she says, the answer is yes.

A statement from Amazon said that workers can choose which tasks to complete at their discretion and review a task’s details before accepting it. Requesters set the specifics of any given task, such as allotted time, pay and instruction levels, according to Amazon. “Amazon Mechanical Turk is a marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks, such as labeling images, answering surveys, transcribing text or reviewing AI outputs,” said Montana MacLachlan, an Amazon spokesperson.

Pawloski isn’t alone. A dozen AI raters, workers who check an AI’s responses for accuracy and groundedness, told the Guardian that, after becoming aware of the way chatbots and image generators function and just how wrong their output can be, they have begun urging their friends and family not to use generative AI at all – or at least trying to educate their loved ones on using it cautiously.

These trainers work on a range of AI models – Google’s Gemini, Elon Musk’s Grok, other popular models, and several smaller or lesser-known bots. One worker, an AI rater with Google who evaluates the responses generated by Google Search’s AI Overviews, said that she tries to use AI as sparingly as possible, if at all. The company’s approach to AI-generated responses to questions of health, in particular, gave her pause, she said, requesting anonymity for fear of professional reprisal. She said she observed her colleagues evaluating AI-generated responses to medical matters uncritically and was tasked with evaluating such questions herself, despite a lack of medical training. At home, she has forbidden her 10-year-old daughter from using chatbots.

“She has to learn critical thinking skills first or she won’t be able to tell if the output is any good,” the rater said.

“Ratings are just one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models,” a statement from Google reads. “We also have a range of strong protections in place to surface high quality information across our products.”

These people are part of a global workforce of tens of thousands who help chatbots sound more human. When checking AI responses, they also try their best to ensure that a chatbot doesn’t spout inaccurate or harmful information.

When the people who make AI seem trustworthy are those who trust it the least, however, experts believe it signals a much larger issue. “It shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored,” said Alex Mahadevan, director of MediaWise at Poynter, a media literacy program. “So this means when we see the final [version of the] chatbot, we can expect the same type of errors they’re experiencing. It does not bode well for a public that is increasingly going to LLMs for news and information.”

AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality.

Brook Hansen, an AI worker on Amazon Mechanical Turk, explained that while she doesn’t mistrust generative AI as a concept, she also doesn’t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive. “We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks,” said Hansen, who has been doing data work since 2010 and has had a part in training some of Silicon Valley’s most popular AI models. “If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.”

Dispensing false information in a confident tone, rather than offering no answer when none is readily available, is a major flaw of generative AI, experts say.

An audit of the top 10 generative AI models, including ChatGPT, Gemini and Meta’s AI, by the media literacy non-profit NewsGuard revealed that the non-response rate of chatbots fell from 31% in August 2024 to 0% in August 2025. At the same time, the chatbots’ likelihood of repeating false information almost doubled, from 18% to 35%, NewsGuard found. None of the companies responded to NewsGuard’s request for comment at the time.

“I wouldn’t trust any facts [the bot] offers up without checking them myself – it’s just not reliable,” said another Google AI rater, requesting anonymity due to a nondisclosure agreement she has signed with the contracting company. She warns people about using it and echoed another rater’s point about people with only cursory knowledge being tasked with medical questions and sensitive ethical ones, too.

“This is not an ethical robot. It’s just a robot.”

“We joke that [chatbots] would be great if we could get them to stop lying,” said one AI tutor who has worked with Gemini, ChatGPT and Grok, requesting anonymity, having signed nondisclosure agreements.

Another AI rater, who started rating responses for Google’s products in early 2024, began to feel he couldn’t trust AI around six months into the job. He was tasked with stumping the model – meaning he had to ask Google’s AI various questions that would expose its limitations or weaknesses.

Having a degree in history, this worker asked the model historical questions for the task. “I asked it about the history of the Palestinian people, and it wouldn’t give me an answer no matter how I rephrased the question,” recalled this worker, requesting anonymity, having signed a nondisclosure agreement. “When I asked it about the history of Israel, it had no problems giving me a very extensive rundown. We reported it, but nobody seemed to care at Google.”

When asked specifically about the situation the rater described, Google did not issue a statement.

For this Google worker, the biggest concern with AI training is the feedback given to AI models by raters like him. “After having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that,” he said. He used the term “garbage in, garbage out”, a principle in computing which holds that if you feed bad or incomplete data into a technical system, the output will carry the same flaws.

The rater avoids using generative AI and has also “advised every family member and friend of mine to not buy newer phones that have AI integrated in them, to resist automatic updates if possible that add AI integration, and to not tell AI anything personal”, he said.

Whenever the topic of AI comes up in a social conversation, Hansen reminds people that AI is not magic – explaining the army of invisible workers behind it, the unreliability of the information and how environmentally damaging it is.

“Once you’ve seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – you stop seeing AI as futuristic and start seeing it as fragile,” said Adio Dinika, who studies the labor behind AI at the Distributed AI Research Institute, about people who work behind the scenes. “In my experience it’s always people who don’t understand AI who are enchanted by it.”

The AI workers who spoke to the Guardian said they are taking it upon themselves to make better choices and create awareness around them, particularly emphasizing the idea that AI, in Hansen’s words, “is only as good as what’s put into it, and what’s put into it is not always the best information”. She and Pawloski gave a presentation in May at the Michigan Association of School Boards spring conference. In a room full of school board members and administrators from across the state, they spoke about the ethical and environmental impacts of artificial intelligence, hoping to spark a conversation.

“Many attendees were shocked by what they learned, since most had never heard about the human labor or environmental footprint behind AI,” said Hansen. “Some were grateful for the insight, while others were defensive or frustrated, accusing us of being ‘doom and gloom’ about technology they saw as exciting and full of potential.”

Pawloski compares the ethics of AI to those of the textile industry: when people didn’t know how cheap clothes were made, they were happy to find the best deal and save a few bucks. But as the stories of sweatshops started coming out, consumers had a choice and knew they should be asking questions. She believes it’s the same for AI.

“Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?” she said. “We are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.”