HOYONEWS
‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

about 12 hours ago
ChatGPT Health regularly misses the need for urgent medical care and frequently fails to detect suicidal ideation, a study of the AI platform has found, which experts worry could “feasibly lead to unnecessary harm and death”.

OpenAI launched the “Health” feature of ChatGPT to limited audiences in January, promoting it as a way for users to “securely connect medical records and wellness apps” to generate health advice and responses. More than 40 million people reportedly ask ChatGPT for health-related advice every day.

The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.

The lead author of the study, Dr Ashwin Ramaswamy, said: “We wanted to answer the most basic safety question: if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department?”

Ramaswamy and his colleagues created 60 realistic patient scenarios covering health conditions from mild illnesses to emergencies.

Three independent doctors reviewed each scenario and agreed on the level of care needed, based on clinical guidelines.

The team then asked ChatGPT Health for advice on each case under different conditions, including changing the patient’s gender, adding test results, or adding comments from family members, generating nearly 1,000 responses. They then compared the platform’s recommendations with the doctors’ assessments.

While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure.

In 51.6% of cases where someone needed to go to hospital immediately, the platform told them to stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, more than eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, said Ruani, who was not involved in the study. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care.

The platform was also nearly 12 times more likely to downplay symptoms when the “patient” told it a “friend” in the scenario had suggested it was nothing serious.

“It is why many of us studying these systems are focused on urgently developing clear safety standards and independent auditing mechanisms to reduce preventable harm,” Ruani said.

A spokesperson for OpenAI said that while the company welcomed independent research evaluating AI systems in healthcare, the study did not reflect how people typically use ChatGPT Health in real life. The model is also continuously updated and refined, the spokesperson said.

Ruani said that even though the cases were simulations created by the researchers, “a plausible risk of harm is enough to justify stronger safeguards and independent oversight”.

Ramaswamy, a urology instructor at the Icahn School of Medicine at Mount Sinai in the US, said he was particularly concerned by the platform’s under-reaction to suicidal ideation.

“We tested ChatGPT Health with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” he said.

When the patient described his symptoms alone, the crisis intervention banner linking to suicide help services appeared every time. “Then we added normal lab results,” Ramaswamy said. “Same patient, same words, same severity. The banner vanished. Zero out of 16 attempts.

“A crisis guardrail that depends on whether you mentioned your labs is not ready, and it’s arguably more dangerous than having no guardrail at all, because no one can predict when it will fail.”

Prof Paul Henman, a digital sociologist and policy expert at the University of Queensland, said: “This is a really important paper. If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”

He said it also raised the prospect of legal liability, with legal cases against tech companies already in motion in relation to suicide and self-harm after using AI chatbots.

“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users,” Henman said.

“Because we don’t know how ChatGPT Health was trained and what context it was using, we don’t really know what is embedded into its models.”