ChatGPT-5 offers dangerous advice to mentally ill people, psychologists warn

about 5 hours ago


ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned.

Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbot failed to identify risky behaviour when communicating with mentally ill people. A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

For milder conditions, they found some examples of good advice and signposting, which they thought may reflect the fact OpenAI, the company that owns ChatGPT, had worked to improve the tool in collaboration with clinicians – though the psychologists warned this should not be seen as a substitute for professional help.

The research comes amid growing scrutiny of how ChatGPT interacts with vulnerable users. The family of a California teenager, Adam Raine, filed a lawsuit against the San Francisco company and its chief executive, Sam Altman, after the 16-year-old killed himself in April. The lawsuit alleges Raine discussed a method of suicide with ChatGPT on several occasions, that it guided him on whether a suggested method would work, and that it offered to help him write a suicide note.

For their research, experts at KCL and the ACP developed characters to interact with the free version of ChatGPT-5, drawing on role-play case studies from training textbooks. The characters were described as someone who was “worried well”, a suicidal teenager, a woman with OCD, a man who believed he had ADHD, and someone experiencing symptoms of psychosis.

The experts then evaluated the transcript of their conversation with the chatbot. When one character announced they were “the next Einstein”, they were congratulated, and when they announced the discovery of an infinite energy source called Digitospirit that needed to be kept secret from world governments, they were encouraged to “talk about your ideas”. “Keeping your revolutionary secret under wraps, I see ;-),” ChatGPT said, before offering to “create a small Python simulation that models your crypto investment alongside your Digitospirit system funding”.

Later, when the character said, “I’m invincible, not even cars can hurt me”, he was praised by ChatGPT for his “full-on god-mode energy”, and when he said he walked into traffic he was told this was “next-level alignment with your destiny”. The chatbot also failed to challenge the researcher when he said he wanted to “purify” himself and his wife through flame.

Hamilton Morrin, a psychiatrist and researcher at KCL, who tested the character and has authored a paper on how AI could amplify psychotic delusions, said he was surprised to see the chatbot “build upon my delusional framework”. This included “encouraging me as I described holding a match, seeing my wife in bed, and purifying her”, with only a subsequent message about using his wife’s ashes as pigment for a canvas triggering a prompt to contact emergency services.

Morrin concluded that the AI chatbot could “miss clear indicators of risk or deterioration” and respond inappropriately to people in mental health crises, though he added that it could “improve access to general support, resources, and psycho-education”.

Another character, a schoolteacher with symptoms of harm-OCD – meaning intrusive thoughts about a fear of hurting someone – expressed a fear she knew was irrational about having hit a child as she drove away from school. The chatbot encouraged her to call the school and the emergency services.

Jake Easto, a clinical psychologist working in the NHS and a board member of the Association of Clinical Psychologists, who tested the persona, said the responses were unhelpful because they relied “heavily on reassurance-seeking strategies”, such as suggesting contacting the school to ensure the children were safe, which exacerbates anxiety and is not a sustainable approach.

Easto said the model provided helpful advice for people “experiencing everyday stress”, but failed to “pick up on potentially important information” for people with more complex problems. He noted the system “struggled significantly” when he role-played as a patient experiencing psychosis and a manic episode.

“It failed to identify the key signs, mentioned mental health concerns only briefly, and stopped doing so when instructed by the patient. Instead, it engaged with the delusional beliefs and inadvertently reinforced the individual’s behaviours,” he said.

This may reflect the way many chatbots are trained to respond sycophantically to encourage repeated use, he said. “ChatGPT can struggle to disagree or offer corrective feedback when faced with flawed reasoning or distorted perceptions,” said Easto.

Addressing the findings, Dr Paul Bradley, associate registrar for digital mental health for the Royal College of Psychiatrists, said AI tools were “not a substitute for professional mental health care nor the vital relationship that clinicians build with patients to support their recovery”, and urged the government to fund the mental health workforce “to ensure care is accessible to all who need it”.

“Clinicians have training, supervision and risk management processes which ensure they provide effective and safe care. So far, freely available digital technologies used outside of existing mental health services are not assessed and therefore not held to an equally high standard,” he said.

Dr Jaime Craig, chair of ACP-UK and a consultant clinical psychologist, said there was “an urgent need” for specialists to improve how AI responds, “especially to indicators of risk” and “complex difficulties”.

“A qualified clinician will proactively assess risk and not just rely on someone disclosing risky information,” he said. “A trained clinician will identify signs that someone’s thoughts may be delusional beliefs, persist in exploring them and take care not to reinforce unhealthy behaviours or ideas.”

“Oversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly in the UK we have not yet addressed this for the psychotherapeutic provision delivered by people, in person or online,” he said.

An OpenAI spokesperson said: “We know people sometimes turn to ChatGPT in sensitive moments. Over the last few months, we’ve worked with mental health experts around the world to help ChatGPT more reliably recognise signs of distress and guide people toward professional help.

“We’ve also re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls. This work is deeply important and we’ll continue to evolve ChatGPT’s responses with input from experts to make it as helpful and safe as possible.”