Google puts users at risk by downplaying health disclaimers under AI Overviews

Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.

When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help, rather than relying solely on its summaries.

“AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said.

But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice. Google only issues a warning if users choose to request additional health information and click on a button called “Show more”.

Even then, safety labels only appear below all of the extra medical advice assembled using generative AI, and in a smaller, lighter font.

“This is for informational purposes only,” the disclaimer tells users who click through for further details after seeing the initial summary, and navigate their way to the very end of the AI Overview. “For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”

Google did not deny its disclaimers fail to appear when users are first served medical advice, or that they appear below AI Overviews and in a smaller, lighter font.

AI Overviews “encourage people to seek professional medical advice”, and frequently mention seeking medical attention within the summary itself “when appropriate”, a spokesperson said.

AI experts and patient advocates presented with the Guardian’s findings said they were concerned. Disclaimers serve a vital purpose, they said, and should appear prominently when users are first provided with medical advice.

“The absence of disclaimers when users are initially served medical information creates several critical dangers,” said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert in AI and human-computer interaction.

“First, even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy.

In healthcare contexts, this can be genuinely dangerous.

“Second, the issue isn’t just about AI limitations – it’s about the human side of the equation. Users may not provide all necessary context or may ask the wrong questions by misobserving their symptoms.

“Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”

Gina Neff, a professor of responsible AI at Queen Mary University of London, said the “problem with bad AI Overviews is by design” and Google was to blame. “AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous.”

In January, a Guardian investigation revealed people were being put at risk of harm by false and misleading health information in Google AI Overviews. Neff said the investigation’s findings showed why prominent disclaimers were essential.

“Google makes people click through before they find any disclaimer,” she said.

“People reading quickly may think the information they get from AI Overviews is better than what it is, but we know it can make serious mistakes.”

Following the Guardian’s reporting, Google removed AI Overviews for some but not all medical searches.

Sonali Sharma, a researcher at Stanford University’s centre for AI in medicine and imaging (AIMI), said: “The major issue is that these Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user’s question at a time where they are trying to access information and get an answer as quickly as possible.

“For many people, because that single summary is there immediately, it basically creates a sense of reassurance that discourages further searching, or scrolling through the full summary and clicking ‘Show more’ where a disclaimer might appear.

“What I think can lead to real-world harm is the fact that the AI Overviews can often contain partially correct and partially incorrect information, and it becomes very difficult to tell what is accurate or not, unless you are familiar with the subject matter already.”

A Google spokesperson said: “It’s inaccurate to suggest that AI Overviews don’t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.”

Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, called for urgent action. “We know misinformation is a real problem, but when it comes to health misinformation, it’s potentially really dangerous,” said Bishop.

“That disclaimer needs to be much more prominent, just to make people step back and think … ‘Is this something I need to check with my medical team rather than acting upon it? Can I take this at face value or do I really need to look into it in more detail and see how this information relates to my own specific medical situation?’ Because that’s the key here.”

He added: “I’d like this disclaimer to be right at the top. I’d like it to be the first thing you see. And ideally it would be the same size font as everything else you’re seeing there, not something that’s small and easy to miss.”