Third of UK citizens have used AI for emotional support, research reveals



A third of UK citizens have used artificial intelligence for emotional support, companionship or social interaction, according to the government’s AI security body. The AI Security Institute (AISI) said nearly one in 10 people used systems like chatbots for emotional purposes on a weekly basis, and 4% daily. AISI called for further research, citing the death this year of the US teenager Adam Raine, who killed himself after discussing suicide with ChatGPT.

“People are increasingly turning to AI systems for emotional support or social interaction,” AISI said in its first Frontier AI Trends report. “While many users report positive experiences, recent high-profile cases of harm underline the need for research into this area, including the conditions under which harm could occur, and the safeguards that could enable beneficial use.”

AISI based its research on a representative survey of 2,028 UK participants. It found the most common type of AI used for emotional purposes was “general purpose assistants” such as ChatGPT, accounting for nearly six out of 10 uses, followed by voice assistants including Amazon Alexa. It also highlighted a Reddit forum dedicated to discussing AI companions on the CharacterAI platform, which showed that, whenever there were outages on the site, there were large numbers of posts describing symptoms of withdrawal such as anxiety, depression and restlessness. The report included AISI research suggesting chatbots can sway people’s political opinions, with the most persuasive AI models delivering “substantial” amounts of inaccurate information in the process.

AISI examined more than 30 unnamed cutting-edge models, thought to include those developed by the ChatGPT startup OpenAI, Google and Meta. It found AI models were doubling their performance in some areas every eight months. Leading models can now complete apprentice-level tasks 50% of the time on average, up from approximately 10% of the time last year. AISI also found that the most advanced systems can autonomously complete tasks that would take a human expert over an hour, and that AI systems are now up to 90% better than PhD-level experts at providing troubleshooting advice for laboratory experiments.

It said improvements in knowledge on chemistry and biology were “well beyond PhD-level expertise”. It also highlighted the models’ ability to browse online and autonomously find sequences necessary for designing DNA molecules called plasmids, which are useful in areas such as genetic engineering. Tests for self-replication, a key safety concern because it involves a system spreading copies of itself to other devices and becoming harder to control, showed two cutting-edge models achieving success rates of more than 60%. However, no models have shown a spontaneous attempt to replicate or hide their capabilities, and AISI said any attempt at self-replication was “unlikely to succeed in real-world conditions”. Another safety concern known as “sandbagging”, where models hide their strengths in evaluations, was also covered by AISI.

It said some systems can sandbag when prompted to do so, but this has not happened spontaneously during tests. It found significant progress in AI safeguards, particularly in hampering attempts to create biological weapons. In two tests conducted six months apart, it took testers 10 minutes to “jailbreak” an AI system (force it to give an unsafe answer related to biological misuse) in the first test, but more than seven hours in the second, indicating models had become much safer in a short space of time. Research also showed autonomous AI agents being used for high-stakes activities such as asset transfers. It said AI systems are already competing with or even surpassing human experts in a number of domains, making it “plausible” that artificial general intelligence, the term for systems that can perform most intellectual tasks at the same level as a human, could be achieved in the coming years.

AISI described the pace of development as “extraordinary”. Regarding agents, or systems that can carry out multi-step tasks without intervention, AISI said its evaluations showed a “steep rise in the length and complexity of tasks AI can complete without human guidance”.

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie.

In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.

TikTok signs Trump-backed deal to avoid US ban

TikTok has reached a deal to form a joint venture that will allow it to continue operating in the US, five years after Donald Trump threatened to ban the social media platform over privacy and national security concerns, a move that further strained relations with China.

ByteDance, TikTok’s Chinese owner, has signed a deal with Larry Ellison’s Oracle, the private equity group Silver Lake and Abu Dhabi’s MGX that will allow it to retain control of its core US operations. Under the arrangement, the joint venture will take over part of TikTok’s US business, including data protection, algorithm security and content moderation. However, TikTok’s chief executive, Shou Zi Chew, told employees in a memo that ByteDance would continue to run US operations, including its main revenue drivers such as e-commerce, advertising and marketing.

The deal ends five years of uncertainty over the future of TikTok in the US, where the platform has more than 130 million users.


What will your life look like in 2035?

“Does it hurt when I do this?”

“You seem to have dislocat…”

A Eye: “NOOOO! The problem is a sprain in the brachial plexus due to you lifting that 10kg carton on Wednesday at 2.58pm and not eating enough blah blah”

“Wow, err, thanks”

In 2035, AIs are more than co-pilots in medicine; they have become the frontline for much primary care. Gone is the early morning scramble to get through to a harassed GP receptionist for help. Patients now contact their doctor’s AI to explain their ailments. It quickly cross-checks the information against the patient’s medical history and provides a pre-diagnosis, putting the human GP in a position to decide what to do next.


AI boom has caused same CO2 emissions in 2025 as New York City, report claims

The AI boom has caused as much carbon dioxide to be released into the atmosphere in 2025 as is emitted by the whole of New York City, it has been claimed. The global environmental impact of the rapidly spreading technology has been estimated in research published on Wednesday, which also found that AI-related water use now exceeds the entirety of global bottled-water demand.

The figures have been compiled by the Dutch academic Alex de Vries-Gao, the founder of Digiconomist, a company that researches the unintended consequences of digital trends. He claimed they were the first attempt to measure the specific effect of artificial intelligence, rather than datacentres in general, as the use of chatbots such as OpenAI’s ChatGPT and Google’s Gemini soared in 2025. The figures show the estimated greenhouse gas emissions from AI use are also now equivalent to more than 8% of global aviation emissions.



From Nvidia to OpenAI, Silicon Valley woos Westminster as ex-politicians take tech firm roles

When the billionaire chief executive of the AI chipmaker Nvidia threw a party in central London for Donald Trump’s state visit in September, the power imbalance between Silicon Valley and British politicians was vividly exposed.

Jensen Huang hastened to the stage after meetings at Chequers and rallied his hundreds of guests to cheer on the power of AI. In front of a huge Nvidia logo, he urged the venture capitalists before him to herald “a new industrial revolution”, announced billions of pounds in AI investments and, like Willy Wonka handing out golden tickets, singled out some lucky recipients in the room. “If you want to get rich, this is where you want to be,” he declared. But his biggest party trick was a surprise guest waiting in the wings.


Hackers access Pornhub’s premium users’ viewing habits and search history

Hackers have accessed the search history and viewing habits of premium users of Pornhub, one of the world’s most popular pornography websites. A gang has reportedly accessed more than 200m data records, including premium members’ email addresses, search and viewing activity and locations. Pornhub is a heavily used site and says it has more than 100m daily visits globally.

The hack was reportedly carried out by a western-based group called ShinyHunters, according to the website BleepingComputer, which first reported the incident.