Two dead and 11 seriously ill in meningitis outbreak at University of Kent


Meta and Google trial: are infinite scroll and autoplay creating addicts?
It was as “easy as ABC”, claimed the lawyer prosecuting a landmark social media harm case against Meta and Google, which heard closing arguments this week. The defendants were guilty, said Mark Lanier, of “addicting the brains of children”. Not true, replied the tech companies. Meta insisted providing young people with a “safer, healthier experience has always been core to our work”. Features such as autoplay videos, infinite scrolling and constantly chirruping alerts, woven into the fabric of online platforms, were central to the six-week trial in Los Angeles, which has been compared to the cases against tobacco companies in the 1990s.

New study raises concerns about AI chatbots fueling delusional thinking
A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people. A summary of existing evidence on artificial intelligence-induced psychosis was published last week in the Lancet Psychiatry, highlighting how chatbots can encourage delusional thinking – though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals. For his paper, Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analyzed 20 media reports of so-called “AI psychosis” and described current theories as to how chatbots might induce or exacerbate delusions. “Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability,” he wrote.

Fake rooms, props and a script to lure victims: inside an abandoned Cambodia scam centre
It is as if you have walked into a branch of one of Vietnam’s banks. A row of customer service desks, divided by plastic screens, with landline phones, promotional leaflets and staff business cards. A seated waiting area and a private meeting room. All of it features the OCB bank’s logo, or its trademark green colour. This is not a genuine bank branch, however.

Apple cuts China App Store commission fees after government pressure
Apple announced late on Thursday it would lower the commission fees collected in its App Store in mainland China. The move follows pressure from regulators in the tech company’s second-largest market, as well as global scrutiny of its payment requirements. Fees for in-app purchases and paid transactions will be lowered to 25% from 30% starting on Sunday, Apple said in a statement on its blog for developers. “Apple is making changes to the App Store in China following discussions with the Chinese regulator,” the company’s announcement reads. “As of March 15, 2026, changes will be made to the commission rates that apply to the China mainland storefront of the App Store on iOS and iPadOS.”

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war
The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago. Anthropic’s feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government’s decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons. Anthropic has argued that giving in to the DoD’s demands to permit “any lawful use” of its technology would violate its founding safety principles and open up its technology for potential abuse, staking an ethical boundary that others in the industry must decide whether they want to cross.

AI toys for young children must be more tightly regulated, say researchers
It was all going well. Charlotte, five, was chatting with an AI soft toy called Gabbo at a London play centre about her family, her drawing of a heart to represent them and what makes her happy. She even offered a couple of kisses to the £80 toy with a face like a computer screen. It was when she declared: “Gabbo, I love you”, that the fluent conversation came to an abrupt halt. “As a friendly reminder, please ensure interactions adhere to the guidelines provided,” said Gabbo, awkwardly crashing into its guardrails.

Oil company shares soar to all-time highs as Middle East war turbocharges price per barrel

Beyond the strait: why attacks on Kharg Island could keep oil prices high

AI could give us our lives back – if we don’t blow it

‘Cruel hoax’ or ‘work-life balance nirvana’: whatever happened to the four-day work week?

Stout clobber? Guinness tie-up features £1,295 ‘pub carpet’ jumper

Relief for some of Britain’s poorest lands at right moment to cushion Iran aftershocks | Heather Stewart