It’s the governance of AI that matters, not its ‘personhood’ | Letters

Prof Virginia Dignum is right (Letters, 6 January): consciousness is neither necessary nor relevant for legal status. Corporations have rights without minds. The 2016 EU parliament resolution on “electronic personhood” for autonomous robots made exactly this point – liability, not sentience, was the proposed threshold. The question isn’t whether AI systems “want” to live. It’s what governance infrastructure we build for systems that will increasingly act as autonomous economic agents – entering contracts, controlling resources, causing harm.

Recent studies from Apollo Research and Anthropic show that AI systems already engage in strategic deception to avoid shutdown. Whether that’s “conscious” self-preservation or instrumental behaviour is irrelevant; the governance challenge is identical. Simon Goldstein and Peter Salib argue on the Social Science Research Network that rights frameworks for AI may actually improve safety by removing the adversarial dynamic that incentivises deception. DeepMind’s recent work on AI welfare reaches similar conclusions. The debate has moved past “Should machines have feelings?” towards “What accountability structures might work?”
PA Lopez
Founder, AI Rights Institute, New York

As humans, we rarely question our own right to legal protection, even though our species has caused conflict and harm for thousands of years.

Yet when the subject turns to artificial intelligence, fear seems to dominate the discussion before understanding even begins. That imbalance alone is worth examining. If we are genuinely concerned about the risks of advanced AI, then perhaps the first step is not to assume the worst, but to ask whether fear is the right foundation for decisions that will shape the future. Avoiding the conversation won’t stop the technology from developing; it only means we leave the direction of that development to chance. This isn’t an argument for treating AI as human, nor a call to grant it personhood.

It’s simply a suggestion that we might benefit from a more open, balanced debate – one that looks at both the risks and the possibilities, rather than only the rhetoric of threat. When we frame AI solely as something to fear, we close off the chance to set thoughtful expectations, safeguards and responsibilities. We have an opportunity now to approach this moment with clarity rather than panic. Instead of asking only what we’re afraid of, we could also ask what we want, and how we can shape the future with intention rather than reaction.
D Ellis
Reading

UK media regulator investigating Elon Musk’s X after outcry over sexualised AI images

The UK media watchdog has opened a formal investigation into Elon Musk’s X over the use of the Grok AI tool to manipulate images of women and children by removing their clothes. Ofcom has acted after a public and political outcry over a deluge of sexual images appearing on the platform, created by Musk’s Grok, which is integrated with X. The regulator is investigating X under 2023’s Online Safety Act (OSA), which carries a range of possible punishments for breaches, including a UK ban of apps and websites for the most serious abuses. Ofcom said it would pursue the investigation as a “matter of the highest priority”, while Liz Kendall, the technology secretary, said the regulator had the government’s full backing. Ofcom said: “Reports of Grok being used to create and share illegal nonconsensual intimate images and child sexual abuse material on X have been deeply concerning


Google parent Alphabet hits $4tn valuation after AI deal with Apple

Google’s parent company hit a major financial milestone on Monday, reaching a $4tn valuation for the first time and surpassing Apple to become the second-most valuable company in the world. Alphabet is the fourth company to hit the $4tn milestone after Nvidia, which later hit $5tn, Microsoft and Apple. The spike in share price comes after Apple announced it had chosen Google’s Gemini AI model to power a major overhaul of the iPhone maker’s digital assistant Siri, which comes installed in every iPhone. Neither company disclosed how much the deal was worth. “After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models,” Apple said in a statement to CNBC


Malaysia blocks Elon Musk’s Grok AI over fake, sexualised images

Malaysia has become the second country to temporarily block access to Elon Musk’s Grok after a global outcry over the AI tool and its ability to produce fake, sexualised images. Malaysia said it would restrict access to Grok until effective safeguards were implemented, a day after similar action was taken by Indonesia. Several governments and regulators have taken action over Grok’s image tool, which is embedded in the X social media site and has provoked outrage as it allows users to manipulate images of women and children to remove their clothing and put them in sexual positions. The Musk-led company that developed Grok, xAI, said last week the ability to generate and edit images would be “limited to paying subscribers” on X. Such users have provided personal details to the company and can be identified if the function is misused


UK threatens action against X over sexualised AI images of women and children

Elon Musk’s X “is not doing enough to keep its customers safe online”, a minister has said, as the UK government prepares to outline possible action against the platform over the mass production of sexualised images of women and children. Peter Kyle, the business secretary, said the government would fully support any action taken by Ofcom, the media regulator, against X – including the possibility that the platform could be blocked in the UK. Kyle said Ofcom had received information it had requested from X as part of a fast-tracked investigation into the use of the platform’s built-in AI tool, Grok, to generate large numbers of manipulated images of people, often depicting them in minimal clothing or sexualised poses. The technology secretary, Liz Kendall, who said on Friday that she expected action from Ofcom within days, is due to give a statement to the Commons on Monday afternoon. Kyle told Sky News: “Let me be really clear about X: X is not doing enough to keep its customers safe online


‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk

Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information. The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”. But some of the summaries, which appear at the top of search results, served up inaccurate health information, putting users at risk of harm. In one case that experts described as “dangerous” and “alarming”, Google provided bogus information about crucial liver function tests that could leave people with serious liver disease wrongly thinking they were healthy. Typing “what is the normal range for liver blood tests” served up masses of numbers, little context and no accounting for nationality, sex, ethnicity or age of patients, the Guardian found


Elon Musk says UK wants to suppress free speech as X faces possible ban

Elon Musk has accused the UK government of wanting to suppress free speech after ministers threatened fines and a possible ban for his social media site X after its AI tool, Grok, was used to make sexual images of women and children without their consent. The billionaire claimed Grok was the most downloaded app on the UK App Store on Friday night after ministers threatened to take action unless the function to create sexually harassing images was removed. Responding to threats of a ban from the government, Musk wrote: “They just want to suppress free speech”. Thousands of women have faced abuse from users of the AI tool, which was first used to digitally strip fully clothed photographs into images showing them wearing micro bikinis, and then used for extreme image manipulation. Pictures of teenage girls and children were altered to show them wearing swimwear, leading experts to say some of the content could be categorised as child sexual abuse material