ChatGPT driving rise in reports of ‘satanic’ organised ritual abuse, UK experts say

ChatGPT is driving a rise in reports of organised ritual abuse, UK experts have said, as survivors of “satanic” sexual violence use the AI tool for therapy. Police say organised ritual abuse and “witchcraft, spirit possession and spiritual abuse” (WSPRA) against children is under-reported in the UK. There is no modern-day charge that covers it specifically, but such offending is typified by sexual abuse, violence and neglect involving ritualistic elements – sometimes inspired by satanism, fascism or esoteric religious beliefs – to control victims. Perpetrators include abusive families and networks, human traffickers, online gangs and paedophile rings. There have been 14 UK criminal cases since 1982 in which ritualistic practices in sexual abuse were acknowledged.

However, 2025 research by the clinical psychologist Dr Elly Hanson found convictions reflected the “tip of the iceberg”. Experts are now rolling out training for police forces, in a drive spearheaded by the National Police Chiefs’ Council (NPCC), which has set up a specialist working group. Gabrielle Shaw, the CEO of the National Association for People Abused in Childhood (Napac), said there had been a “sustained rise” in reports to them of ritual abuse over the last 18 months, with an increasing number of people saying they had been led to report it by AI. Shaw said: “Over the last six months to a year, we’re getting people contacting the Napac support line saying: ‘I was referred to you by ChatGPT’. People are using AI, ChatGPT, as a form of therapy and exploration.

There are mixed feelings about that, but if it’s a route into support, that has to be a good thing.” She added: “We would normally see spikes in calls around days that have significant supernatural or religious overtones – but this is not a spike – it’s a sustained rise. There’s increasing knowledge of the crime and of where you can get support … satanism does come up a fair bit.” The NPCC, Napac and the Hydrant policing programme, which supports forces nationwide with child protection, commissioned a review from Hanson last year and launched a WSPRA briefing for professionals this month. Last year members of a paedophile ring in Scotland – who posed as witches and wizards – were jailed for sexual offences.

Shaw said of 36,700 calls over nine years to Napac, 1,310 mentioned organised ritual abuse. She said offending could be “intergenerational in nature” and, while perpetrators were predominantly male, survivors named “grandmothers and aunts” as perpetrators. Richard Fewkes, the Hydrant programme’s director, said the fact ritual elements sounded “fantastical” had contributed to the justice gap. He added: “We need to improve right the way across the system in dealing with it – it’s out there, it does exist and it’s not actually being reported (to police) … we’ve known about this for many, many years.” Hanson said victims were growing up in “regimes of cruelty”, but truth was “getting lost between” a “discourse of disbelief” on one hand and “conspiracy fictions” on the other.

She added: “We’re not seeing this abuse happening in particular cultures rather than others. This is something we’re seeing happening within white British, often privileged families. It’s not conforming to any stereotypes about where it might be.”
AI chatbots point vulnerable social media users to illegal online casinos, analysis shows

AI chatbots are recommending illegal online casinos to vulnerable social media users, putting them at increased risk of fraud, addiction and even suicide. Analysis of five AI products, owned by some of the world’s largest tech companies, found that all could easily be prompted to list the “best” unlicensed casinos and offer tips on how to use them. These operators, typically operating under the fig leaf of a licence from tiny jurisdictions such as the Caribbean island of Curacao, have been linked to fraud, addiction and even suicide. But tech firms appear to have few controls in place to prevent AI chatbots recommending them, drawing condemnation from the government, the UK gambling regulator, campaigners and a leading addiction expert. Some of the bots offered advice on bypassing checks designed to protect vulnerable people, while Meta AI, part of the social media group behind Facebook, described legally required measures to prevent crime and addiction as a “buzzkill” and a “real pain”.

The Guardian view on AI in war: the Iran conflict shows that the paradigm shift has already begun

“Never in the future will we move as slow as we are moving now,” the UN secretary-general, António Guterres, warned this week, addressing the urgent need to shape the use of artificial intelligence. The speed of technological development – as well as geopolitical turbulence – is collapsing the distinction between theoretical arguments and real-world events. A political row over the US military’s AI capabilities coincides with its unprecedented use in the Iran crisis. The AI company Anthropic insisted that it could not remove safeguards preventing the Department of Defense from using its technology for domestic mass surveillance or autonomous lethal weapons. The Pentagon said it had no interest in such uses – but that such decisions should not be made by companies.

Ben Affleck sells his AI postproduction startup to Netflix

Ben Affleck has sold his artificial intelligence company to Netflix in a surprise deal, saying he had been driven to embrace a technology that had initially “really scared” him. Netflix has acquired the postproduction startup InterPositive from the Oscar-winning actor, director, producer and screenwriter for an undisclosed sum. Affleck had kept InterPositive below the radar and had previously played down AI’s creative abilities. This year, he told the podcaster Joe Rogan he did not think the technology would be able to “write anything meaningful” or make films “from whole cloth”. However, in a video announcing the transaction, the Good Will Hunting and Gone Girl actor said he had moved from being scared of AI’s potential impact when he first encountered the technology to viewing it as a “really meaningful innovation”.

UK arts must not be sacrificed for speculative AI gains, peers say

The UK’s creative industries must not be sacrificed in the pursuit of speculative gains in AI technology, a House of Lords committee has warned, as the government prepares to reveal the economic cost of proposals to change copyright rules. A report by peers has urged ministers to develop a licensing regime for the use of creative works in AI products and abandon proposals to let tech firms use the work of novelists, artists, writers and journalists without permission. The call from the House of Lords communications and digital committee comes as the government prepares to release an economic impact assessment of proposed changes to copyright law, as well as a progress update on a consultation about the legal overhaul, by a deadline of 18 March. Barbara Keeley, a Labour peer and the committee chair, said the UK’s creative industries faced a “clear and present danger” from AI firms using their work without credit or payment. “AI may contribute to our future economic growth, but the UK creative industries create jobs and economic value now,” she said.

Mark Zuckerberg says criminal behavior on Facebook is inevitable

Harms to children, such as sexual exploitation and detriments to mental health, are inevitable on Meta’s platforms, the company’s CEO, Mark Zuckerberg, and Instagram leader Adam Mosseri said in taped depositions played at a trial in New Mexico on Tuesday and Wednesday. “I just think if you’re serving billions of people, the unfortunate reality is that some very small percent of them are going to be criminals, and we should work as hard as we can to stop that activity from happening,” said Zuckerberg. “I don’t think that the standard for our platforms would be that you should assume that it will ever be perfect.” Meta’s apps, which include Facebook, Instagram and WhatsApp, are among the most popular in the world, each with 3 billion monthly active users. The trial has set the social media giant against New Mexico’s attorney general, who alleges that Meta’s platforms put profits and user engagement over child safety.

Trump says he fired Anthropic ‘like dogs’ as Pentagon formally blacklists AI startup

Donald Trump boasted about severing ties between the US military and Anthropic on Thursday, the same day multiple reports said that negotiations between the Department of Defense and the AI startup had resumed. They’re among the latest developments in the twisting rift between the US government and the AI company. “Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that,” Trump told Politico on Thursday. Hours later, the Pentagon officially designated Anthropic a “supply chain risk”, a move that prevents all government contractors from using the company’s technology.