
Meet the AI workers who tell their friends and family to stay away from AI

2 days ago
When the people making AI seem trustworthy are the ones who trust it the least, it shows that incentives for speed are overtaking safety, experts say.

Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. As an AI worker on Amazon Mechanical Turk – a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output – Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking.

Roughly two years ago, while working from home at her dining room table, she took up a job designating tweets as racist or not. When she was presented with a tweet that read “Listen to that mooncricket sing”, she almost clicked on the “no” button before deciding to check the meaning of the word “mooncricket”, which, to her surprise, was a racial slur against Black Americans.

“I sat there considering how many times I may have made the same mistake and not caught myself,” said Pawloski.

The potential scale of her own errors, and those of thousands of other workers like her, made Pawloski spiral. How many others had unknowingly let offensive material slip by? Or worse, chosen to allow it?

After years of witnessing the inner workings of AI models, Pawloski decided to stop using generative AI products personally, and she tells her family to steer clear of them. “It’s an absolute no in my house,” said Pawloski, referring to how she doesn’t let her teenage daughter use tools like ChatGPT. With the people she meets socially, she encourages them to ask AI about something they are very knowledgeable in, so they can spot its errors and understand for themselves how fallible the tech is.

Pawloski said that every time she sees a menu of new tasks to choose from on the Mechanical Turk site, she asks herself whether there is any way what she’s doing could be used to hurt people – many times, she says, the answer is yes.

A statement from Amazon said that workers can choose which tasks to complete at their discretion and can review a task’s details before accepting it. Requesters set the specifics of any given task, such as allotted time, pay and instruction levels, according to Amazon. “Amazon Mechanical Turk is a marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks, such as labeling images, answering surveys, transcribing text or reviewing AI outputs,” said Montana MacLachlan, an Amazon spokesperson.

Pawloski isn’t alone. A dozen AI raters – workers who check an AI’s responses for accuracy and groundedness – told the Guardian that, after becoming aware of the way chatbots and image generators function and just how wrong their output can be, they have begun urging their friends and family not to use generative AI at all, or at least trying to educate their loved ones to use it cautiously.

These trainers work on a range of AI models – Google’s Gemini, Elon Musk’s Grok, other popular models, and several smaller or lesser-known bots. One worker, an AI rater with Google who evaluates the responses generated by Google Search’s AI Overviews, said that she tries to use AI as sparingly as possible, if at all. The company’s approach to AI-generated responses to questions of health, in particular, gave her pause, she said, requesting anonymity for fear of professional reprisal. She said she observed her colleagues evaluating AI-generated responses to medical matters uncritically, and was tasked with evaluating such questions herself despite a lack of medical training. At home, she has forbidden her 10-year-old daughter from using chatbots.

“She has to learn critical thinking skills first or she won’t be able to tell if the output is any good,” the rater said.

“Ratings are just one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models,” a statement from Google reads. “We also have a range of strong protections in place to surface high quality information across our products.”

These people are part of a global workforce of tens of thousands who help chatbots sound more human. When checking AI responses, they also try their best to ensure that a chatbot doesn’t spout inaccurate or harmful information.

When the people who make AI seem trustworthy are those who trust it the least, however, experts believe it signals a much larger issue.

“It shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored,” said Alex Mahadevan, director of MediaWise at Poynter, a media literacy program. “So this means when we see the final [version of the] chatbot, we can expect the same type of errors they’re experiencing. It does not bode well for a public that is increasingly going to LLMs for news and information.”

AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality.

Brook Hansen, an AI worker on Amazon Mechanical Turk, explained that while she doesn’t mistrust generative AI as a concept, she doesn’t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive.

“We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks,” said Hansen, who has been doing data work since 2010 and has had a part in training some of Silicon Valley’s most popular AI models. “If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.”

Dispensing false information in a confident tone, rather than offering no answer when none is readily available, is a major flaw of generative AI, experts say.

An audit of the top 10 generative AI models, including ChatGPT, Gemini and Meta AI, by the media literacy non-profit NewsGuard found that the chatbots’ non-response rates fell from 31% in August 2024 to 0% in August 2025. Over the same period, the chatbots’ likelihood of repeating false information nearly doubled, from 18% to 35%, NewsGuard found. None of the companies responded to NewsGuard’s request for comment at the time.

“I wouldn’t trust any facts [the bot] offers up without checking them myself – it’s just not reliable,” said another Google AI rater, requesting anonymity due to a nondisclosure agreement she signed with the contracting company. She warns people about using it, and echoed another rater’s point that people with only cursory knowledge are tasked with medical questions, and sensitive ethical ones, too.

“This is not an ethical robot. It’s just a robot.”

“We joke that [chatbots] would be great if we could get them to stop lying,” said one AI tutor who has worked with Gemini, ChatGPT and Grok, requesting anonymity, having signed nondisclosure agreements.

Another AI rater, who started rating responses for Google’s products in early 2024, began to feel he couldn’t trust AI around six months into the job. He was tasked with stumping the model – asking Google’s AI questions that would expose its limitations or weaknesses.

Having a degree in history, this worker asked the model historical questions for the task. “I asked it about the history of the Palestinian people, and it wouldn’t give me an answer no matter how I rephrased the question,” recalled this worker, requesting anonymity, having signed a nondisclosure agreement. “When I asked it about the history of Israel, it had no problems giving me a very extensive rundown. We reported it, but nobody seemed to care at Google.”

When asked specifically about the situation the rater described, Google did not issue a statement.

For this Google worker, the biggest concern with AI training is the feedback given to AI models by raters like him. “After having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that,” he said. He used the term “garbage in, garbage out”, a principle in computing which holds that if you feed bad or incomplete data into a system, the output will carry the same flaws.

The rater avoids using generative AI and has also “advised every family member and friend of mine to not buy newer phones that have AI integrated in them, to resist automatic updates if possible that add AI integration, and to not tell AI anything personal”, he said.

Whenever the topic of AI comes up in a social conversation, Hansen reminds people that AI is not magic – explaining the army of invisible workers behind it, the unreliability of the information and how environmentally damaging it is.

“Once you’ve seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – you stop seeing AI as futuristic and start seeing it as fragile,” said Adio Dinika, who studies the labor behind AI at the Distributed AI Research Institute, speaking of the people who work behind the scenes. “In my experience it’s always people who don’t understand AI who are enchanted by it.”

The AI workers who spoke to the Guardian said they are taking it upon themselves to make better choices and to raise awareness around them, particularly emphasizing the idea that AI, in Hansen’s words, “is only as good as what’s put into it, and what’s put into it is not always the best information”.

She and Pawloski gave a presentation in May at the Michigan Association of School Boards spring conference. In a room full of school board members and administrators from across the state, they spoke about the ethical and environmental impacts of artificial intelligence, hoping to spark a conversation.

“Many attendees were shocked by what they learned, since most had never heard about the human labor or environmental footprint behind AI,” said Hansen. “Some were grateful for the insight, while others were defensive or frustrated, accusing us of being ‘doom and gloom’ about technology they saw as exciting and full of potential.”

Pawloski compares the ethics of AI to those of the textile industry: when people didn’t know how cheap clothes were made, they were happy to find the best deal and save a few bucks. But as stories of sweatshops started coming out, consumers had a choice and knew they should be asking questions. She believes it’s the same for AI.

“Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?” she said. “We are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.”