Google puts users at risk by downplaying health disclaimers under AI Overviews

Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.

When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help rather than relying solely on its summaries. “AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said.

But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice. Google only issues a warning if users choose to request additional health information and click on a button called “Show more”.

Even then, safety labels only appear below all of the extra medical advice assembled using generative AI, and in a smaller, lighter font. “This is for informational purposes only,” the disclaimer tells users who click through for further details after seeing the initial summary and navigate their way to the very end of the AI Overview. “For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”

Google did not deny that its disclaimers fail to appear when users are first served medical advice, or that they appear below AI Overviews and in a smaller, lighter font.

AI Overviews “encourage people to seek professional medical advice”, and frequently mention seeking medical attention within the summary itself “when appropriate”, a spokesperson said.

AI experts and patient advocates presented with the Guardian’s findings said they were concerned. Disclaimers serve a vital purpose, they said, and should appear prominently when users are first provided with medical advice.

“The absence of disclaimers when users are initially served medical information creates several critical dangers,” said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert in AI and human-computer interaction. “First, even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. In healthcare contexts, this can be genuinely dangerous.

“Second, the issue isn’t just about AI limitations – it’s about the human side of the equation. Users may not provide all necessary context or may ask the wrong questions by misobserving their symptoms.

“Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”

Gina Neff, a professor of responsible AI at Queen Mary University of London, said the “problem with bad AI Overviews is by design” and Google was to blame. “AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous,” she said.

In January, a Guardian investigation revealed people were being put at risk of harm by false and misleading health information in Google AI Overviews. Neff said the investigation’s findings showed why prominent disclaimers were essential.

“Google makes people click through before they find any disclaimer,” she said.

“People reading quickly may think the information they get from AI Overviews is better than what it is, but we know it can make serious mistakes.”

Following the Guardian’s reporting, Google removed AI Overviews for some, but not all, medical searches.

Sonali Sharma, a researcher at Stanford University’s centre for AI in medicine and imaging (AIMI), said: “The major issue is that these Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user’s question at a time where they are trying to access information and get an answer as quickly as possible.

“For many people, because that single summary is there immediately, it basically creates a sense of reassurance that discourages further searching, or scrolling through the full summary and clicking ‘Show more’ where a disclaimer might appear.

“What I think can lead to real-world harm is the fact that the AI Overviews can often contain partially correct and partially incorrect information, and it becomes very difficult to tell what is accurate or not, unless you are familiar with the subject matter already.”

A Google spokesperson said: “It’s inaccurate to suggest that AI Overviews don’t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.”

Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, called for urgent action.

“We know misinformation is a real problem, but when it comes to health misinformation, it’s potentially really dangerous,” said Bishop. “That disclaimer needs to be much more prominent, just to make people step back and think … ‘Is this something I need to check with my medical team rather than acting upon it? Can I take this at face value or do I really need to look into it in more detail and see how this information relates to my own specific medical situation?’ Because that’s the key here.”

He added: “I’d like this disclaimer to be right at the top. I’d like it to be the first thing you see. And ideally it would be the same size font as everything else you’re seeing there, not something that’s small and easy to miss.”