
AI slop tops Billboard and Spotify charts as synthetic music spreads
Three songs generated by artificial intelligence topped music charts this week, reaching the highest spots on Spotify and Billboard charts.

Walk My Walk and Livin’ on Borrowed Time by the outfit Breaking Rust topped Spotify’s “Viral 50” chart in the US, which documents the “most viral tracks right now” on a daily basis, according to the streaming service. A Dutch song, We Say No, No, No to an Asylum Center, an anti-migrant anthem by JW “Broken Veteran” that protests against the creation of new asylum centres, took the top position in Spotify’s global version of the viral chart around the same time. Breaking Rust also appeared in the top five on the global chart.

“You can kick rocks if you don’t like how I talk,” reads a lyric from Walk My Walk, a seeming double entendre challenging those opposed to AI-generated music.

EU investigates Google over ‘demotion’ of commercial content from news media
The EU has opened an investigation into Google Search over concerns the US tech company has been “demoting” commercial content from news media sites.

The bloc’s executive arm announced the move after monitoring found that certain content created with advertisers and sponsors was being given such a low priority by Google that it was in effect no longer visible in search results. European Commission officials said this potentially unfair “loss of visibility and of revenue” to media owners could be a result of an anti-spam policy Google operates.

Under the rules of the Digital Markets Act (DMA), which governs competition in the tech sector, Google must apply “fair, reasonable and non-discriminatory conditions of access to publishers’ websites on Google Search”. Commission officials said the investigation was not into the overall indexing of newspapers or their reporting on Google Search, just into commercial content provided by third parties.

Tech companies and UK child safety agencies to test AI tools’ ability to create abuse images
Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce child abuse images under a new UK law.

The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 in 2024 to 426 in 2025.

Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models – the underlying technology for chatbots such as ChatGPT and image generators such as Google’s Veo 3 – and ensure they have safeguards to prevent them from creating images of child sexual abuse.

Kanishka Narayan, the minister for AI and online safety, said the move was “ultimately about stopping abuse before it happens”, adding: “Experts, under strict conditions, can now spot the risk in AI models early.”

The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime.

ChatGPT violated copyright law by ‘learning’ from song lyrics, German court rules
A court in Munich has ruled that OpenAI’s chatbot ChatGPT violated German copyright law by using hits from top-selling musicians to train its language models, in what creative industry advocates described as a landmark European ruling.

The Munich regional court sided with Germany’s music rights society GEMA, which said ChatGPT had harvested protected lyrics by popular artists to “learn” from them. GEMA, a collecting society that manages the rights of composers, lyricists and music publishers and has approximately 100,000 members, filed the case against OpenAI in November 2024.

The lawsuit was seen as a key European test case in a campaign to stop AI scraping of creative output. OpenAI can appeal against the decision.

Can OpenAI keep pace with industry’s soaring costs?
It is the $1.4tn (£1.1tn) question. How can a loss-making startup such as OpenAI afford such a staggering spending commitment?

Answer that positively and it will go a long way to easing investor concerns over bubble warnings in the artificial intelligence boom, from lofty tech company valuations to a mooted $3tn global spend on datacentres.

The company behind ChatGPT needs a vast amount of computing power – or compute, in tech jargon – to train its models, produce their responses and build even more powerful systems in the future.

Google plans to put datacentres in space to meet demand for AI
Google is hatching plans to put artificial intelligence datacentres into space, with its first trial equipment due to be sent into orbit in early 2027.

Its scientists and engineers believe tightly packed constellations of about 80 solar-powered satellites could be arranged in orbit about 400 miles above the Earth’s surface, equipped with the powerful processors required to meet rising demand for AI.

Prices of space launches are falling so quickly that by the middle of the 2030s the running costs of a space-based datacentre could be comparable to one on Earth, according to Google research released on Tuesday. Using satellites could also minimise the impact on the land and water resources needed to cool existing datacentres.

Once in orbit, the datacentres would be powered by solar panels that can be up to eight times more productive than those on Earth.
