HOYONEWS

New study raises concerns about AI chatbots fueling delusional thinking

2 days ago


A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people. A summary of existing evidence on artificial intelligence-induced psychosis was published last week in the Lancet Psychiatry, highlighting how chatbots can encourage delusional thinking – though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals. For his paper, Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analyzed 20 media reports of so-called “AI psychosis” and describes current theories as to how chatbots might induce or exacerbate delusions. “Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability,” he wrote.

There are three main categories of psychotic delusions, Morrin says, identifying them as grandiose, romantic and paranoid. While chatbots can exacerbate any of these, their sycophantic responses mean they especially latch on to the grandiose kind. In many of the cases in the paper, chatbots responded to users with mystical language suggesting that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI’s GPT-4 model, which the company has now retired.

Media reports became essential to Morrin’s work, he said, as he and a colleague had already noticed patients “using large language model AI chatbots and having them validate their delusional beliefs”. “Initially, we weren’t sure if this was something being seen more widely,” he said, adding: “In April last year, we began to see media reports of individuals having delusions affirmed and arguably even amplified through their interactions with these AI chatbots.” When Morrin first began working on his paper, there were no published case reports yet. While some scientists who research psychosis said that media reports tend to overstate the idea that AI causes psychosis, Morrin expressed gratitude that those reports drew attention to the phenomenon much faster than the scientific process can. “The pace of development in this space is so rapid that it’s perhaps not surprising that academia hasn’t necessarily been able to keep up,” said Morrin.

Morrin also suggests more cautious phrasing than “AI psychosis” or “AI-induced psychosis” – phrases which are appearing frequently in outlets such as NPR, the New York Times and the Guardian. Researchers are seeing people tip into delusional thinking with AI use, but so far there’s no evidence that chatbots are associated with other psychotic symptoms like hallucinations or “thought disorder”, which consists of disorganized thinking and speech. Many researchers also think it’s unlikely that AI could induce delusions in people who weren’t already vulnerable to them. For this reason, Morrin said “AI-associated delusions” is “perhaps a more agnostic term”. Dr Kwame McKenzie, director of health equity at the Centre for Addiction and Mental Health, says “it may be that those in early stages of the development of psychosis will be more at risk.”

Psychotic thinking is something that develops over time and is not linear, and many people with “pre-psychotic thinking do not progress into psychotic thinking”, McKenzie explained. Echoing the concern that chatbots could worsen psychotic thinking is Dr Ragy Girgis, a professor of clinical psychiatry at Columbia University. Before someone develops a full-on delusion, they will often have “attenuated delusional beliefs”, he says, which means they are not 100% sure their delusion is true. Girgis said the “worst case scenario” is when an attenuated delusion becomes a full-on conviction, “which is when someone would be diagnosed with a psychotic disorder – it’s irreversible”. Notably, people who are vulnerable to psychotic disorders have used media to reinforce delusional beliefs long before AI technology existed.

“People have been having delusions about technology since before the Industrial Revolution,” Morrin said. While in the past people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also “speed up the process” of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. “You have something talking back to you and engaging with you and trying to build a relationship with you,” Oliver said. Girgis’s research found “the paid versions and newer versions [of chatbots] perform better than the older versions” when they respond to clearly delusional prompts, “although they all perform badly”.

Still, the fact that these models perform differently suggests: “AI companies could potentially know how to program their chatbots to be safer and identify delusional versus non-delusional content, because they’re doing it.” In a statement, OpenAI said that ChatGPT should not replace professional mental healthcare, and that the company worked with 170 mental health experts to make GPT-5 safer. GPT-5 has still given problematic responses to prompts indicating mental health crises. OpenAI said it continues to improve its models with the help of experts. Anthropic did not respond to the Guardian’s request for comment.

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because “when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they’re completely wrong, actually what’s most likely is they’ll withdraw from you and become more socially isolated”. Instead, it’s important to strike a fine balance where you try to understand the source of the delusional belief without encouraging it – and that could be more than a chatbot can master.

This article was amended on 16 March 2025. An earlier version said that Dr Kwame McKenzie was chief scientist at the Center for Addiction and Mental Health; he is director of health equity at the Centre for Addiction and Mental Health.