AI-powered hacking has exploded into industrial-scale threat, Google says



In just three months, AI-powered hacking has gone from a nascent problem to an industrial-scale threat, according to a report from Google. The findings from Google’s threat intelligence group add to an intensifying global discussion about how the newest AI models are extremely adept at coding – and becoming extremely powerful tools for exploiting vulnerabilities in a broad array of software systems. The report finds that criminal groups, as well as state-linked actors from China, North Korea and Russia, appear to be widely using commercial models – including Gemini, Claude and tools from OpenAI – to refine and scale up attacks. “There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun,” said John Hultquist, the group’s chief analyst.

“Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware and make many other improvements.” Last month, the AI company Anthropic declined to release one of its newest models, Mythos, after asserting that it had extremely powerful capabilities and posed a threat to governments, financial institutions and the world generally if it fell into the wrong hands. Specifically, Anthropic said Mythos had found zero-day vulnerabilities – the term for a flaw in a product previously unknown to its developers – in “every major operating system and every major web browser”. The company said these discoveries necessitated “substantial coordinated defensive action across the industry”.

Google’s report found, however, that a criminal group recently was on the verge of leveraging a zero-day vulnerability to conduct a “mass exploitation” campaign – and that this group appeared to be using an AI large language model (LLM) that was not Mythos. The report also found that groups were “experimenting” with OpenClaw, an AI tool that went viral in February for offering its users the ability to hand over large chunks of their lives to an AI agent with no guardrails and an unfortunate tendency to mass-delete email inboxes. Steven Murdoch, a professor of security engineering at University College London, said AI tools could help the defensive side in cybersecurity as well as the hackers. “That’s why I’m not panicking. In general we have reached a stage where the old way of discovering bugs is gone, and it will now all be LLM-assisted. It will take a little while before the consequences of this get shaken out,” he said.

However, if AI is helping ambitious hackers to reach their productivity goals, doubts remain as to whether it is bolstering the broader economy. The Ada Lovelace Institute (ALI), an independent AI research body, has cautioned against assumptions of a multibillion-pound public sector productivity boost from AI. The UK government has estimated a £45bn gain in savings and productivity benefits from public sector investment in digital tools and AI. In a report published on Monday, the ALI said most studies of AI-related increases in productivity referred to time savings or cost reductions, but did not look at outcomes such as better services or improved worker wellbeing.

Other problematic aspects of such research include: whether projections of AI-related efficiency in a workplace really hold up in the real world; headline figures obscuring varying results for using AI in different tasks; and a failure to account for the impact on public sector employment and service delivery. “The productivity estimates shaping major government decisions about AI sometimes rest on untested assumptions and rely on methodologies whose limitations are not always appreciated by those using figures in the wild,” the ALI report said. “The result is a gap between the confidence with which productivity claims are presented and the strength of the evidence behind them.” The report’s recommendations include: encouraging future studies to reflect uncertainty over the impact of the technology; ensuring government departments measure the impact of AI programmes “from the start”; and supporting longer-term studies that measure productivity gains over years rather than weeks.

Who is Louis Mosley, the man tasked with defending Palantir against its critics?

The hall was packed with rightwing radicals when Louis Mosley heralded a coming revolution. Just as Oliver Cromwell – that “crusader for Christ and liberty” – routed King Charles I’s royalists, “a similar revolution is brewing today”, said the UK and Europe boss of Palantir. Globalism’s “twilight” was upon us, he said in a speech dotted with admiring mentions of the podcaster Joe Rogan and “Elon’s Doge”. It was not a typical peroration for a big UK government contractor with more than £600m in deals with the NHS, the Ministry of Defence and police. But Palantir, the world’s most controversial tech company, is no typical contractor.


AI-powered surveillance company Palantir created a chore coat. Great, now I have no choice but to burn mine | Van Badham

It’s taken me years to find a chore coat with a cut that flatters my big tits but, now that I finally own one, I want to incinerate it. Such is the power of brand contamination; infamous data surveillance megacorp Palantir has decided to bang a logo on a chore coat to sell as corporate merch. Chore coats are the traditional short denim or twill jacket of the 19th-century French working class. Palantir, however, is a company whose public words and commercial-in-confidence activities are inspiring local calls to have its contracts cancelled and its business banned. The gentle French garment is now as cursed as whatever “Marie Amazonette” will ever wear to the Met Gala.


‘Being human helps’: despite the rise of AI, is there still hope for Europe’s translators?

In February 2022, while he was plugging away at rendering the US writer Dana Spiotta’s novel Wayward into French, the literary translator Yoann Gentric decided he needed a bit of light relief. He would test whether AI could put him out of work. Gentric had been grappling with a short verbless sentence that described the book’s protagonist’s feelings upon opening a window: “Bright, sharp night air, bracing.” He put the prompt into DeepL, a neural-network-powered machine translation engine that regularly outperforms Google Translate in accuracy assessments. The proposed translation was reassuring, with his job security in mind: L’air de la nuit, vif et vif, était vivifiant (The night air, lively and lively, was enlivening).


UK schools should remove pupils’ online photos as AI blackmail threat grows, say experts

UK schools should remove pictures of pupils’ faces from their websites and social media accounts because blackmailers are using them to create sexually explicit images, experts have said. Child safety experts and the UK’s National Crime Agency (NCA) warn that criminals are using AI to manipulate photos of children and then demand cash not to publish them. They are recommending educational institutions remove identifiable pictures of children from their websites and social media accounts – or consider not using them at all. The Internet Watch Foundation (IWF) said an unnamed UK secondary school had recently been subjected to a blackmail attempt after criminals used the institution’s website or social media accounts to take photos of schoolchildren and then, using AI tools, turned them into child sexual abuse material (CSAM). The blackmailers sent the images to the school and threatened to publish them online if they did not receive money.


Meta sues Ofcom over fines regime for breaches of Online Safety Act

Meta has launched a legal challenge against the UK’s media regulator over the fees and fines regime it is enforcing under landmark digital safety legislation. The Facebook and Instagram owner is claiming that Ofcom’s methodology for calculating the charges is flawed and should not be based on a company’s global revenue. Breaches of the Online Safety Act can be punished by fines of up to 10% of qualifying worldwide revenue (QWR) or £18m – whichever is higher. In the case of Meta, which reported revenues of $201bn last year, Ofcom could in theory impose a fine of about $20bn for breaches. Under regulations introduced in September, Ofcom’s fees will also be based on a proportion of an organisation’s QWR and apply to businesses that made more than £250m of this revenue a year.


‘No one has done this in the wild’: study observes AI replicating itself

It’s the stuff of science fiction cinema, or particularly breathless AI company blogposts: new research finds recent AI systems can independently copy themselves on to other computers. In the doom scenario, this means that when the superintelligent AI goes rogue, it will escape shutdown by seeding itself across the world wide web, lurking outside the reach of frantic IT professionals and continuing to plot world domination or paving over the world with solar panels. “We’re rapidly approaching the point where no one would be able to shut down a rogue AI, because it would be able to self-exfiltrate its weights and copy itself to thousands of computers around the world,” said Jeffrey Ladish, the director of Palisade Research, a Berkeley-based organisation which did the study. The study is one more entry in a growing catalogue of unsettling AI capabilities revealed in recent months. In March, researchers at Alibaba claimed to have caught a system they developed – Rome – tunnelling out of its environment to an external system in order to mine crypto.