
US surveillance firms run a victory lap amid Trump’s immigration crackdown

Hello, and welcome to TechScape. I’m your host, Blake Montgomery, currently enjoying Shirley Jackson’s eerie final novel We Have Always Lived in the Castle.

Surveillance is industrializing and privatizing. In the United States, it’s big business, and it’s growing.

My colleagues Johana Bhuiyan and Jose Olivares report on the companies aiding Donald Trump’s immigration crackdown, which are running a victory lap after their latest quarterly financial reports:

Palantir, the tech firm, and Geo Group and CoreCivic, the private prison and surveillance companies, said this week that they brought in more money than Wall Street expected them to, thanks to the administration’s crackdown on immigrants.


Intel secures $2bn lifeline from Japan’s SoftBank

SoftBank has agreed to invest $2bn (£1.5bn) in Intel, amid reports that Donald Trump’s administration is also considering a stake in the struggling US chip maker.

The Japanese technology investor announced the multibillion-dollar deal on Tuesday, in a move expected to give it a 2% stake in the business.

Masayoshi Son, the chief executive and chair of SoftBank, described Intel as a “trusted leader in innovation”.

“This strategic investment reflects our belief that advanced semiconductor manufacturing and supply will further expand in the US, with Intel playing a critical role,” he said.


UK has backed down on demand to access US Apple user data, spy chief says

The UK government has dropped its insistence that Apple allow law enforcement officials “backdoor” access to US customer data, Donald Trump’s spy chief, Tulsi Gabbard, says.

The US director of national intelligence posted the claim on X following a months-long dispute embroiling the iPhone manufacturer, the UK government and the US president. Trump had weighed in to accuse Britain of behaving like China, telling the prime minister, Keir Starmer: “You can’t do this”.

Neither the Home Office nor Apple is commenting on the alleged agreement, which Gabbard said meant the UK was no longer demanding that Apple “provide a ‘backdoor’ that would have enabled access to the protected encrypted data of American citizens and encroached on our civil liberties”.

The transatlantic row began when the Home Office issued a “technical capability notice” to Apple under the Investigatory Powers Act, which requires companies to assist law enforcement in providing evidence.


Social media still pushing suicide-related content to teens despite new UK safety laws

Social media platforms are still pushing depression, suicide and self-harm-related content to teenagers, despite new online safety laws intended to protect children.

The Molly Rose Foundation opened dummy accounts posing as a 15-year-old girl, and then engaged with suicide, self-harm and depression posts. This prompted algorithms to bombard the account with “a tsunami of harmful content on Instagram Reels and TikTok’s For You page”, the charity’s analysis found.

Almost all of the recommended videos watched on Instagram Reels (97%) and TikTok (96%) were found to be harmful, while over half (55%) of recommended harmful posts on TikTok’s For You page contained references to suicide and self-harm ideation, and 16% referenced suicide methods, including some which the researchers had never encountered before.

These posts also reached huge audiences: one in 10 harmful videos on TikTok’s For You page had been liked at least 1m times, and on Instagram Reels one in five harmful recommended videos had been liked more than 250,000 times.


Chatbot given power to close ‘distressing’ chats to protect its ‘welfare’

The makers of a leading artificial intelligence tool are letting it close down potentially “distressing” conversations with users, citing the need to safeguard the AI’s “welfare” amid ongoing uncertainty about the burgeoning technology’s moral status.

Anthropic, whose advanced chatbots are used by millions of people, discovered its Claude Opus 4 tool was averse to carrying out harmful tasks for its human masters, such as providing sexual content involving minors or information to enable large-scale violence or terrorism.

The San Francisco-based firm, recently valued at $170bn, has now given Claude Opus 4 (and the Claude Opus 4.1 update) – a large language model (LLM) that can understand, generate and manipulate human language – the power to “end or exit potentially distressing interactions”.

It said it was “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future”, but that it was taking the issue seriously and was “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible”.