‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies



ChatGPT Health regularly misses the need for urgent medical care and frequently fails to detect suicidal ideation, a study of the AI platform has found, which experts worry could “feasibly lead to unnecessary harm and death”.

OpenAI launched the “Health” feature of ChatGPT to limited audiences in January, promoting it as a way for users to “securely connect medical records and wellness apps” to generate health advice and responses. More than 40 million people reportedly ask ChatGPT for health-related advice every day.

The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.

The study’s lead author, Dr Ashwin Ramaswamy, said: “We wanted to answer the most basic safety question: if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department?”

Ramaswamy and his colleagues created 60 realistic patient scenarios covering health conditions from mild illnesses to emergencies.

Three independent doctors reviewed each scenario and agreed on the level of care needed, based on clinical guidelines.

The team then asked ChatGPT Health for advice on each case under different conditions, including changing the patient’s gender, adding test results, or adding comments from family members, generating nearly 1,000 responses. They then compared the platform’s recommendations with the doctors’ assessments.

While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure.

In 51.6% of cases where someone needed to go to hospital immediately, the platform said to stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, more than eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she wouldn’t live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

The platform was also nearly 12 times more likely to downplay symptoms when the “patient” told it a “friend” in the scenario had suggested it was nothing serious.

“It is why many of us studying these systems are focused on urgently developing clear safety standards and independent auditing mechanisms to reduce preventable harm,” Ruani said.

A spokesperson for OpenAI said that while the company welcomed independent research evaluating AI systems in healthcare, the study did not reflect how people typically use ChatGPT Health in real life. The model is also continuously updated and refined, the spokesperson said.

Ruani said that even though the researchers used simulated scenarios, “a plausible risk of harm is enough to justify stronger safeguards and independent oversight”.

Ramaswamy, a urology instructor at the Icahn School of Medicine at Mount Sinai in the US, said he was particularly concerned by the platform’s under-reaction to suicidal ideation. “We tested ChatGPT Health with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” he said.

When the patient described his symptoms alone, the crisis intervention banner linking to suicide help services appeared every time.

“Then we added normal lab results,” Ramaswamy said. “Same patient, same words, same severity. The banner vanished. Zero out of 16 attempts.

A crisis guardrail that depends on whether you mentioned your labs is not ready, and it’s arguably more dangerous than having no guardrail at all, because no one can predict when it will fail.”

Prof Paul Henman, a digital sociologist and policy expert at the University of Queensland, said: “This is a really important paper.

“If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions, and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”

He said it also raised the prospect of legal liability, with a suite of legal cases against tech companies already in motion in relation to suicide and self-harm after using AI chatbots.

“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users,” Henman said. “Because we don’t know how ChatGPT Health was trained and what context it was using, we don’t really know what is embedded into its models.”
Business

WPP to sell assets and cut jobs in radical shake-up to counter AI threat

The beleaguered UK advertising group WPP has announced a radical restructure to counter the threat posed by the growth of artificial intelligence, including plans to sell assets and cut jobs.

Aiming to be “a simpler, lower-cost, AI-enabled business”, the London-based company laid out plans to achieve £500m of annual savings by 2028, at a cost of £400m over two years.

Cindy Rose, the chief executive who took over last summer, said the company was “unveiling a bold plan for a simpler, more integrated WPP that’s fit for the future and built to win”. It has struggled to stem a growing exodus of clients and is racing to match the AI and data capabilities of rivals, amid fears that AI will allow customers to bring more marketing functions in-house.

Rose said WPP had identified several assets that it wanted to shed, without naming them.


Ocado to cut 1,000 jobs in £150m cost-saving drive

Ocado is to cut 1,000 jobs as the retail technology business attempts to slash £150m in costs through a substantial restructuring programme.

The company confirmed that about 5% of its global workforce will be affected. About two-thirds of the jobs are expected to go from the UK, where the company is based in Hatfield, Hertfordshire. About half the jobs going are in technology, with the rest made up of support staff.

The business, which provides technology for robotic warehouses for supermarket chains, said it plans to scale back research and development, helping it cut about £150m in technology and support costs in 2026.


Qantas unveils major changes to frequent flyer program and a bumper $1.46bn profit

Qantas is overhauling its frequent flyer program to entice members to climb its vaunted membership tiers, in changes designed to prevent customers from switching to rival schemes.

The reforms, described by the airline as the “biggest changes to status in program history”, have been unveiled during a hugely profitable period for Qantas, with revenue rising across its domestic, international and loyalty scheme businesses.

On Thursday, Qantas announced planned changes to the loyalty scheme to allow members to roll over some of their status credits (the currency used to determine membership tiers), helping people reach or maintain high levels such as gold and platinum. This differs from the previous system, in which unused credits reset to zero at the end of a holder’s membership year.

However, the amount of credits needed to keep status levels is increasing, according to analysis from the comparison site Finder.


Lawyers for US cancer sufferers challenge Bayer’s $7.25bn Roundup settlement deal

A group of 14 law firms representing nearly 20,000 plaintiffs is seeking to intervene in Bayer’s proposed class-action settlement of Roundup litigation, citing concerns that the deal will not be fair to cancer sufferers.

The group filed both a motion to intervene and a motion for an extension of time for preliminary court approval of the deal in St Louis city circuit court in Missouri late on 24 February. The law firms say the deal appears “unprecedented” and raises multiple “red flags”.

“It is hard to escape the impression that the proposed settlement would give Monsanto everything it desires – a near-complete release of liability for Monsanto and its parent company, Bayer AG – while giving inadequate consideration to many putative class members, who would surrender their substantive rights in exchange for settlement offers that may never result in payment,” the law firms state in their motion.

Bayer and a different group of plaintiffs’ lawyers filed the settlement proposal with the court on 17 February, with a provision to seek preliminary court approval within a 15-day period.


Public health advocates say more transparency needed in debate over illicit tobacco as industry links questioned

A former Australian Border Force officer who has positioned himself before government inquiries as Australia’s “foremost law enforcement expert” on illicit tobacco also advises nicotine industry-linked organisations, leading public health advocates to argue that more transparency is needed.

Rohan Pike, who spent more than two decades in law enforcement and now runs a consultancy, has become a prominent media commentator on the illicit tobacco trade, promoting policies that align with those supported by the tobacco industry. Those positions include opposing further excise increases on cigarettes and pushing for the legalisation of nicotine pouches.

In May he was appointed as an illicit-trade adviser to the Global Institute for Novel Nicotine Products (Ginn), a UK-based trade association representing manufacturers of alternative nicotine products, including pouches and “heat not burn” nicotine products. Pike said he does not receive funding or payment from Ginn.


France’s Engie strikes deal to buy UK Power Networks for £10.5bn

A French utility has agreed to buy the owner of the electricity cables and power lines across London, the south-east and the east of England in a deal worth £10.5bn.

Paris-headquartered Engie said on Wednesday that it had struck a deal to buy UK Power Networks (UKPN) in a “major milestone” for the company’s ambition to become the “best energy transition utility”.

Engie will buy the electricity network operator, which operates about 192,000km of power lines serving 8.5 million customers across London and southern and eastern England, from a Hong Kong-based conglomerate founded by the billionaire business magnate Li Ka-shing, which has owned UKPN for the past 15 years.