Signs of psychosis seen in Australian users’ interactions with AI chatbots, expert warns



A leading AI expert has warned that some Australians are showing signs of psychosis or mania in their interactions with chatbots, arguing Silicon Valley is being “careless” with the technology in its pursuit of profit.

During an address at the National Press Club on Wednesday, Toby Walsh, scientia professor of artificial intelligence at the University of New South Wales, said he believed the AI race would bring both “boom and doom”, with some benefits. But his speech – a copy of which was provided to Guardian Australia – also warned about dangers he said had outraged him since the technology began maturing in recent years.

“My childhood dreams are turning into a reality that is both good and bad,” he said in his prepared remarks.

Walsh’s speech highlighted the legal case against OpenAI brought by the family of US teenager Adam Raine, along with the company’s data showing that more than a million of its users each week send messages that include “explicit indicators of potential suicidal planning or intent”.

OpenAI has also said 560,000 of its touted 800 million weekly users have shown signs of psychosis or mania, and another 1.2 million have developed potentially unhealthy bonds with the chatbot.

Walsh said some of those captured by the data were in Australia. “I know because some of them or their loved ones are contacting me by email,” his prepared speech said. “They tell me how the chatbot confirms their wild theories.

That the chatbot tells them, to quote one email, that they’ve ‘cracked the code’. That they’re ‘the only one that could’.”

The chatbots have been designed that way, Walsh said. “They’re designed to be sycophantic. They’re designed to confirm what the user says.

And they’re designed to draw the user in. They always end with an open question, prompting you to continue the conversation and buy more tokens.”

He said it was not in the interests of the companies responsible for the chatbots to tell users to log off instead. “There’s no reason that they couldn’t be designed that way. Except the careless people in Silicon Valley would make less money if they were.”

OpenAI has claimed a GPT-5 update reduced the number of undesirable behaviours from its product and improved user safety.

Walsh also expressed anger over the “large-scale theft” of creative works used to train AI, and over AI-generated summaries of news articles in search results taking traffic away from news sites. “Legally you can’t call it fair use when you’re competing with the owner of the IP,” he said. “I refuse to accept an AI revolution that enriches founders in Silicon Valley by impoverishing Australian artists, writers and musicians.”

Walsh also took aim at companies he said were disregarding laws, particularly around scams.

In November, Reuters reported that Meta’s internal documents from late 2024 projected the company would earn about 10% of its overall annual revenue that year – about $16bn – from illicit advertising. Meta responded that it had reduced scam ads by 58% over the past 18 months.

Walsh said AI was being used to generate these scam ads, that Meta allowed advertisers to use AI to manage the ad campaigns, and that AI decided which ads people saw. He said if a retailer in Australia found that 10% of its goods were counterfeit or illegal, it would be shut down by the weekend. “So I don’t understand how we continue to let Meta trade in Australia,” he said.

Walsh said he despaired that the Australian government was not doing more to regulate AI. “I fear that we’re repeating the mistakes of social media,” he said. “Social media should have been a wake-up call about the harms of unregulated AI.

“We’re about to supercharge the sort of harms we saw with social media with an even more powerful and persuasive technology.

“What I fear most is that I’ll be back here in three or four years’ time saying: ‘We tried to warn you. But another generation of young Australians has now been sacrificed for the profits of big tech’.”

Treasury calls in Blair thinktank to advise on using AI across public services

Ministers have called in Tony Blair’s thinktank and private tech companies to guide them on deploying AI across the UK government, in a move campaigners compared to “inviting in foxes to consult on the future of the henhouse”.

James Murray, chief secretary to the Treasury, chaired a meeting on Wednesday with the director of AI at the Tony Blair Institute for Global Change (TBI), the chair of IBM and senior executives at AI companies including Faculty AI, now part of Accenture, and Dex Hunter-Torricke, a former communications adviser at Google, Facebook and Elon Musk’s SpaceX.

“These people are exactly who can help us create change across the public sector – giving us the hard truths on our approach to AI and advising where we need to prioritise our investment to support real efficiencies,” said Murray, who added that their advice would “feed into efficiency processes ahead of the next spending review”.

The move came after the technology secretary, Liz Kendall, last month said the government’s goal was to “make Britain the fastest AI adoption country in the G7”. The Treasury said it showed it was committing “to private sector engagement on the deployment of artificial intelligence across the public sector so it can improve efficiency and productivity”.


Facial recognition error prompts police to arrest Asian man for burglary 100 miles away

Police arrested a man for a burglary in a city he had never visited after face-scanning software deployed across the UK confused him with another person of south Asian heritage.

Alvi Choudhury, 26, a software engineer, was working at the home he shares with his parents in Southampton in January when police knocked on his door, handcuffed him and held him in custody for nearly 10 hours before releasing him at 2am.

Thames Valley police had used automated facial recognition software that matched him with footage of a suspect in a £3,000 burglary 100 miles away in Milton Keynes, according to documents shared with the Guardian by Liberty Investigates.

But the CCTV footage showed a noticeably younger man with different features apart from similar curly hair, said Choudhury, who was left confused about why he had been arrested. “I was very angry, because the kid looked about 10 years younger than me,” said Choudhury, who wears a beard.


Tech legend Stewart Brand on Musk, Bezos and his extraordinary life: ‘We don’t need to passively accept our fate’

He was at the heart of 1960s counterculture, then paved the way for the libertarian mindset of Silicon Valley. At 87, Brand is still keen to ensure the world is maintained properly – not just today, but for the next 10,000 years.

Stewart Brand thinks big and long.



Reddit fined £14.5m in UK over use of under-13s’ data

The UK information regulator has fined the social news service Reddit £14.5m for using the data of children under the age of 13 unlawfully and potentially exposing them to inappropriate and harmful content.

The hefty punishment from the Information Commissioner’s Office (ICO) is the largest fine yet for a breach of children’s privacy, and comes after the US-based company introduced age checks in July, including age verification to access mature content. Prior to this, the ICO said, there were “a large number of children under 13 on the platform and Reddit did not have a lawful basis for processing their personal information”.

Reddit asks users to declare their age when opening an account, but the ICO said relying on self-declaration presented risks to children as it was easy to bypass.


‘A feedback loop with no brake’: how an AI doomsday report shook US markets

US stock markets have been hit by a further wave of AI jitters, this time from yet another viral – and entirely speculative – warning about the impact of the technology on the world’s largest economy.

The latest foreboding comes from Citrini Research, a little-known US firm that provides insights on “transformative ‘megatrends’”. Its post on Substack, which it called a “scenario, not a prediction”, rattled investors by portraying a near future in which autonomous AI systems – or agents – upend the entire US economy, from jobs to markets and mortgages.

Citrini’s scenario begins now and ends in June 2028, with US unemployment cresting over 10% and an Occupy Silicon Valley movement setting up camp outside OpenAI’s and Anthropic’s offices. In the interim, a series of events triggered by the widespread use of AI agents guts software companies and ripples outwards, hitting private credit and mortgages, and leading to an unchecked downward spiral.