
Google puts users at risk by downplaying health disclaimers under AI Overviews
Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.

When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help rather than relying solely on its summaries. “AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said.

But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice. Google only issues a warning if users choose to request additional health information and click on a button called “Show more”.

Starmer to extend online safety rules to AI chatbots after Grok scandal
Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday.

Emboldened by Elon Musk’s X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a “crackdown on vile illegal content created by AI”.

With more and more children using chatbots for everything from help with their homework to mental health support, the government said it would “move fast to shut a legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act or face the consequences of breaking the law”.

Starmer is also planning to accelerate new restrictions on social media use by children if they are agreed by MPs after a public consultation into a possible under-16 ban. It means that any changes to children’s use of social media, which may include other measures such as restricting infinite scrolling, could happen as soon as this summer.

California’s billionaires pour cash into elections as big tech seeks new allies
Tech billionaires are leveraging tens of millions of dollars to influence California politics in a marked uptick from their previous participation in affairs at the state capitol. Behemoths such as Google and Meta are getting involved in campaigns for November’s elections, as are venture capitalists, cryptocurrency entrepreneurs and Palantir’s co-founders. The industry’s goals run the gamut – from fighting a billionaire tax to supporting a techie gubernatorial candidate to firing up new, influential super political action committees (Pacs).

The phenomenon squarely fits the moment for the state’s politics – with 2026 being the year that Politico has dubbed “the big tech flex”.

Gavin Newsom, California’s tech-friendly governor who has been quick to veto legislation that cramps the sector’s unfettered growth, is reaching his term limit.

No swiping involved: the AI dating apps promising to find your soulmate
Dating apps exploit you, dating profiles lie to you, and sex is basically something old people used to do. You might as well consider it: can AI help you find love?

For a handful of tech entrepreneurs and a few brave Londoners, the answer is “maybe”.

No, this is not a story about humans falling in love with sexy computer voices – and strictly speaking, AI dating of some variety has been around for a while. Most big platforms have integrated machine learning and some AI features into their offerings over the past few years.

But dreams of a robot-powered future – or perhaps just general dating malaise and a mounting loneliness crisis – have fuelled a new crop of startups that aim to use the possibilities of the technology differently.

The problem with doorbell cams: Nancy Guthrie case and Ring Super Bowl ad reawaken surveillance fears
What happens to the data that smart home cameras collect? Can law enforcement access this information – even when users aren’t aware officers may be viewing their footage? Two recent events have put these concerns in the spotlight.

A Super Bowl ad by the doorbell-camera company Ring and the FBI’s pursuit of the kidnapper of Nancy Guthrie, the mother of Today show host Savannah Guthrie, have resurfaced longstanding concerns about surveillance against a backdrop of the Trump administration’s immigration crackdown.

US military used Anthropic’s AI model Claude in Venezuela raid, report says
Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations.

The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela’s defence ministry. Anthropic’s terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.

Anthropic was the first AI developer known to be used in a classified operation by the US department of defence. It was unclear how the tool, which has capabilities ranging from processing PDFs to piloting autonomous drones, was deployed.

Rukmini Iyer’s quick and easy recipe for ginger sesame meatballs with rice and greens | Quick and easy

How to make the perfect chicken massaman – recipe | Felicity Cloake's How to make the perfect …

Koba, London W1: ‘I admire their chutzpah’ – restaurant review

Original Bramley apple tree ‘at risk’ after site where it grows put up for sale

Potstickers and sea bass with ginger and spring onions: Amy Poon’s recipes for lunar new year

How to plan Ramadan meals: minimal work, maximum readiness