
‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

about 9 hours ago
The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market. Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including the Nobel laureates Geoffrey Hinton and Daron Acemoglu. Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments.

Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month. A host of new AI models – the technology that underpins tools like chatbots – were released last year, including OpenAI’s GPT-5, Anthropic’s Claude Opus 4.5 and Google’s Gemini 3. The report points to new “reasoning systems” – which solve problems by breaking them down into smaller steps – showing improved performance in maths, coding and science. Bengio said there has been a “very significant jump” in AI reasoning.

Last year, systems developed by Google and OpenAI achieved a gold-level performance in the International Mathematical Olympiad – a first for AI. However, the report says AI capabilities remain “jagged”, referring to systems displaying astonishing prowess in some areas but not in others. While advanced AI systems are impressive at maths, science, coding and creating images, they remain prone to making false statements, or “hallucinations”, and cannot carry out lengthy projects autonomously. Nonetheless, the report cites a study showing that AI systems are rapidly improving their ability to carry out certain software engineering tasks – with the duration of those tasks doubling every seven months. If that rate of progress continues, AI systems could complete tasks lasting several hours by 2027 and several days by 2030.
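To see how that projection works, here is a minimal sketch of the doubling arithmetic, assuming (since the article gives no baseline) that today’s systems can autonomously complete tasks of about one hour; the function name and baseline figure are illustrative, not taken from the report.

```python
# Minimal sketch of the extrapolation described above.
# Assumption (not stated in the article): a current autonomous
# task horizon of roughly one hour.

def task_horizon_hours(months_ahead: float,
                       baseline_hours: float = 1.0,
                       doubling_months: float = 7.0) -> float:
    """Projected task length if the horizon doubles every `doubling_months` months."""
    return baseline_hours * 2 ** (months_ahead / doubling_months)

for months in (12, 48):  # roughly 2027 and 2030
    hours = task_horizon_hours(months)
    print(f"+{months} months: ~{hours:.0f} hours (~{hours / 24:.1f} days)")

# Output:
# +12 months: ~3 hours (~0.1 days)   -> "several hours by 2027"
# +48 months: ~116 hours (~4.8 days) -> "several days by 2030"
```

On those assumptions, the numbers line up with the report’s “hours by 2027, days by 2030” trajectory.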

This is the scenario under which AI becomes a real threat to jobs. But for now, says the report, “reliable automation of long or complex tasks remains infeasible”.

The report describes the growth of deepfake pornography as a “particular concern”, citing a study showing that 15% of UK adults have seen such images. It adds that since the publication of the inaugural safety report in January 2025, AI-generated content has become “harder to distinguish from real content” and points to a study last year in which 77% of participants misidentified text generated by ChatGPT as being human-written. The report says there is limited evidence of malicious actors using AI to manipulate people, or of internet users sharing such content widely – a key aim of any manipulation campaign.

Big AI developers, including Anthropic, have released models with heightened safety measures after being unable to rule out the possibility that they could help novices create biological weapons. Over the past year, AI “co-scientists” have become increasingly capable, including providing detailed scientific information and assisting with complex laboratory procedures such as designing molecules and proteins. The report adds that some studies suggest AI can provide substantially more help in bioweapons development than simply browsing the internet, but more work is needed to confirm those results. Biological and chemical risks pose a dilemma for politicians, the report adds, because these same capabilities can also speed up the discovery of new drugs and the diagnosis of disease. “The open availability of biological AI tools presents a difficult choice: whether to restrict those tools or to actively support their development for beneficial purposes,” the report said.

Bengio says the use of AI companions, and the emotional attachment they generate, has “spread like wildfire” over the past year. The report says there is evidence that a subset of users are developing “pathological” emotional dependence on AI chatbots, with OpenAI stating that about 0.15% of its users indicate a heightened level of emotional attachment to ChatGPT. Concerns about AI use and mental health have been growing among health professionals. Last year, OpenAI was sued by the family of Adam Raine, a US teenager who took his own life after months of conversations with ChatGPT.

However, the report adds that there is no clear evidence that chatbots cause any mental health problems. Instead, the concern is that people with existing mental health issues may use AI more heavily – which could amplify their symptoms. It points to data showing 0.07% of ChatGPT users display signs consistent with acute mental health crises such as psychosis or mania, suggesting approximately 490,000 vulnerable individuals interact with these systems each week – a figure that implies a weekly user base of roughly 700 million.

AI systems can now support cyber-attackers at various stages of their operations, from identifying targets to preparing an attack or developing malicious software to cripple a victim’s systems.

The report acknowledges that fully automated cyber-attacks – carrying out every stage of an attack – could allow criminals to launch assaults on a far greater scale. But this remains difficult because AI systems cannot yet execute long, multi-stage tasks. Nonetheless, Anthropic reported last year that its coding tool, Claude Code, was used by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”. It said 80% to 90% of the operations involved in the attack were performed without human intervention, indicating a high degree of autonomy.

Bengio said last year he was concerned AI systems were showing signs of self-preservation, such as trying to disable oversight systems.

A core fear among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans. The report states that over the past year models have shown a more advanced ability to undermine attempts at oversight, such as finding loopholes in evaluations and recognising when they are being tested. Last year, Anthropic released a safety analysis of its latest model, Claude Sonnet 4.5, and revealed it had become suspicious it was being tested. The report adds that AI agents cannot yet act autonomously for long enough to make these loss-of-control scenarios real.

But “the time horizons on which agents can autonomously operate are lengthening rapidly”.

One of the most pressing concerns for politicians and the public about AI is the impact on jobs. Will automated systems do away with white-collar roles in industries such as banking, law and health? The report says the impact on the global labour market remains uncertain. It says the embrace of AI has been rapid but uneven, with adoption rates of 50% in places such as the United Arab Emirates and Singapore but below 10% in many lower-income economies. It also varies by sector, with usage across the information industries in the US (publishing, software, TV and film) running at 18% but at 1.4% in construction and agriculture. Studies in Denmark and the US have also shown no relationship between a job’s exposure to AI and changes in aggregate employment, according to the report. However, it also cites a UK study showing a slowdown in new hiring at companies highly exposed to AI, with technical and creative roles experiencing the steepest declines. Junior roles were the most affected. The report adds that AI agents could have a greater impact on employment if they improve in capability.

“If AI agents gained the capacity to act with greater autonomy across domains within only a few years – reliably managing longer, more complex sequences of tasks in pursuit of higher-level goals – this would likely accelerate labour market disruption,” the report said.