Datacentre developers face calls to disclose effect on UK’s net emissions
Datacentre developers are facing pressure to reveal whether their projects will increase the UK’s net greenhouse gas emissions, amid concerns the sites could double national electricity demand. Campaign groups have written to the UK technology secretary, Liz Kendall, warning that the energy required by new AI infrastructure poses a “serious threat to efforts to decarbonise the electricity grid”. Developers should demonstrate that their projects will not cause an increase in the UK’s overall CO2 emissions or local water scarcity, as part of a forthcoming national policy statement (NPS) on datacentres, the letter says. “Without these commitments, such vast electricity use will inevitably generate vast climate emissions,” the campaigners write. The letter is signed by Foxglove, a group that campaigns against big tech dominance, and five other non-governmental organisations including the environmental campaign group Friends of the Earth

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns
OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors. Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input. Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used for those ends. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48. Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known

Suicide forum found to be in breach of Online Safety Act after failing to block UK users
A suicide forum linked to deaths in Britain has been ruled provisionally in breach of the Online Safety Act after it failed to properly block access to UK users when ordered to do so last year. Ofcom, the online regulator, said it could now apply to the courts to demand internet service providers block access to the site in the UK. This will depend on how the site, which also faces fines, responds over the next 10 days. Coroners had been raising concerns about the links between the forum and suicides in the UK since at least 2019, campaigners said. The family of 17-year-old Vlad Nikolin-Caisley, from Southampton, said he took his own life in 2024 after using the site, which Ofcom is not naming

OpenAI announces $110bn funding round that would value firm at $840bn
OpenAI said on Friday it is raising $110bn in a blockbuster funding round that would value the ChatGPT maker at $840bn, in a deal that signals the feverish pace of investment in artificial intelligence. It is more than double the amount the company raised last year, when it racked up $40bn in the largest private tech deal on record. This year’s funding round, which is still open, includes a $30bn investment from SoftBank, $30bn from Nvidia, and $50bn from Amazon, and comes ahead of the AI startup’s expected mega-IPO later this year. Even more investors are expected to join. “We’re super excited about this deal,” OpenAI CEO Sam Altman told CNBC on Friday

Instagram to alert parents if teens repeatedly search self-harm terms
Instagram will start alerting parents if their kids repeatedly search for terms clearly associated with suicide or self-harm. The announcement on Thursday comes as Instagram’s parent company, Meta, is in the midst of two trials over harms to children. A trial under way in Los Angeles questions whether Meta’s platforms deliberately addict and harm minors. Another in New Mexico seeks to determine whether Meta failed to protect kids from sexual exploitation on its platforms. The alerts will only go to parents who are enrolled in Instagram’s parental supervision program

Most senior council officers in England say building work hit by delays

Three in four women unaware menopause can trigger new mental illness, poll finds

The decline in healthy life expectancy in Britain should shock us all | Letters

UK health official recused from puberty blockers trial after bias claims

‘Viruses don’t know borders’: US anti-vaccine rhetoric could impact global measles crisis

Poorly regulated clinics in England are putting children with ADHD at risk, warn doctors