HOYONEWS

© 2025 Hoyonews™. All Rights Reserved.
AI firms warned to calculate threat of super intelligence or risk it escaping human control

3 days ago
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems. Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945, after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity. In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant” – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be “slightly less” than one in three million.

Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control. “The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.” Tegmark said a Compton constant consensus calculated by multiple companies would create the “political will” to agree global safety regimes for AIs.

Tegmark, a professor of physics and AI researcher at MIT, is also a co-founder of the Future of Life Institute, a non-profit that supports the safe development of AI and published an open letter in 2023 calling for a pause in building powerful AIs. The letter was signed by more than 33,000 people including Elon Musk – an early supporter of the institute – and Steve Wozniak, the co-founder of Apple. The letter, produced months after the release of ChatGPT launched a new era of AI development, warned that AI labs were locked in an “out-of-control race” to deploy “ever more powerful digital minds” that no one can “understand, predict, or reliably control”. Tegmark spoke to the Guardian as a group of AI experts including tech industry professionals, representatives of state-backed safety bodies and academics drew up a new approach for developing AI safely. The Singapore Consensus on Global AI Safety Research Priorities report was produced by Tegmark, the world-leading computer scientist Yoshua Bengio and employees at leading AI companies such as OpenAI and Google DeepMind.

It set out three broad areas to prioritise in AI safety research: developing methods to measure the impact of current and future AI systems; specifying how an AI should behave and designing a system to achieve that; and managing and controlling a system’s behaviour. Referring to the report, Tegmark said the argument for safe development in AI had recovered its footing after the most recent governmental AI summit in Paris, when the US vice-president, JD Vance, said the AI future was “not going to be won by hand-wringing about safety”. Tegmark said: “It really feels the gloom from Paris has gone and international collaboration has come roaring back.”
Technology

Paul McCartney and Dua Lipa among artists urging Starmer to rethink AI copyright plans

Hundreds of leading figures and organisations in the UK’s creative industries, including Coldplay, Paul McCartney, Dua Lipa, Ian McKellen and the Royal Shakespeare Company, have urged the prime minister to protect artists’ copyright and not “give our work away” at the behest of big tech. In an open letter to Keir Starmer, a host of major artists claim creatives’ livelihoods are under threat as wrangling continues over a government plan to let artificial intelligence companies use copyright-protected work without permission. Describing copyright as the “lifeblood” of their professions, the letter warns Starmer that the proposed legal change will threaten Britain’s status as a leading creative power. “We will lose an immense growth opportunity if we give our work away at the behest of a handful of powerful overseas tech companies and with it our future income, the UK’s position as a creative powerhouse, and any hope that the technology of daily life will embody the values and laws of the United Kingdom,” the letter says. The letter urges the government to accept an amendment to the data bill proposed by Beeban Kidron, the cross-bench peer and leading campaigner against the copyright proposals.

4 days ago

‘Tone deaf’: US tech company responsible for global IT outage to cut jobs and use AI

The cybersecurity company that became a household name after causing a massive global IT outage last year has announced it will cut 5% of its workforce in part due to “AI efficiency”. In a note to staff earlier this week, released in stock market filings in the US, CrowdStrike’s chief executive, George Kurtz, announced that 500 positions, or 5% of its workforce, would be cut globally, citing AI efficiencies created in the business. “We’re operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs,” he said. Kurtz said AI “flattens our hiring curve, and helps us innovate from idea to product faster”, adding it “drives efficiencies across both the front and back office”. “AI is a force multiplier throughout the business,” he said.

5 days ago

Leave them hanging on the telephone | Brief letters

Regarding dealing with cold callers (Adrian Chiles, 7 May), it’s irritating I know, but if you don’t mind your phone being inaccessible for a few minutes, why not say: “Hang on, I’ll go and get him/her”, and then leave your phone until the caller rings off? At least you will have wasted some of their day.
Robert Walker
Perrancoombe, Cornwall

Re fostering a love of reading in children (Letters, 6 May), one of my fondest memories of my teaching career was story time in the infant class in a local village school. Most of the children came quite a distance on buses. They adored Michael Rosen’s poetry. There were many afternoons when it was home time and they would shout: “Please read another Michael Rosen one, Mrs Mansfield, the driver won’t mind waiting.”

5 days ago

Wikipedia challenging UK law it says exposes it to ‘manipulation and vandalism’

The charity that hosts Wikipedia is challenging the UK’s online safety legislation in the high court, saying some of its regulations would expose the site to “manipulation and vandalism”. In what could be the first judicial review related to the Online Safety Act, Wikimedia Foundation claims it is at risk of being subjected to the act’s toughest category 1 duties, which impose additional requirements on the biggest sites and apps. The foundation said if category 1 duties were imposed on it, the safety and privacy of Wikipedia’s army of volunteer editors would be undermined, its entries could be manipulated and vandalised, and resources would be diverted from protecting and improving the site. Announcing that it was seeking a judicial review of the categorisation regulations, the foundation’s lead counsel, Phil Bradley-Schmieg, said: “We are taking action now to protect Wikipedia’s volunteer users, as well as the global accessibility and integrity of free knowledge.” The foundation said it was not challenging the act as a whole, nor the existence of the requirements themselves, but the rules that decide how a category 1 platform is designated.

5 days ago

Tech giants beat quarterly expectations as Trump’s tariffs hit the sector

Hello, and welcome to TechScape. I’m your host, Blake Montgomery, and this week in tech news: Trump’s tariffs hit tech companies that move physical goods more than their digital-only counterparts. Two stories about AI’s effect on the labor market paint a murky picture. Meta released a standalone AI app, a product it claims already has a billion users through enforced omnipresence. OpenAI dialed back an obsequious version of ChatGPT.

6 days ago
Politics

‘I thought politics was a dirty thing’ – Zack Polanski on his ‘eco-populist’ vision for the Green party
about 18 hours ago

Tory energy spokesman claims UN climate experts are ‘biased’
about 18 hours ago

Labour to defend aid cuts, claiming UK’s days as ‘a global charity’ are over
about 23 hours ago

Counter-terrorism police investigate fires at properties and car linked to Keir Starmer
about 23 hours ago

Starmer accused of echoing far right with ‘island of strangers’ speech
1 day ago

Labour MP says Starmer’s ‘island of strangers’ warning over immigration mimics scaremongering of far right – as it happened
1 day ago