Mother of one of Elon Musk’s sons sues over Grok-generated explicit images

The mother of one of Elon Musk’s children is suing his company – alleging that his Grok AI tool generated explicit images of her, including one depicting her as underage.

Ashley St Clair has filed a lawsuit with the supreme court of the state of New York against xAI, alleging that Grok, which is used on the social media platform X, promised to stop generating explicit images but continued to do so. She is seeking punitive and compensatory damages, claiming dozens of sexually explicit and degrading deepfake images were created by Grok.

After two weeks of public outcry at the tool being used to create sexualised images of women and children, the company said on Wednesday it would “geoblock” the ability of users “to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X” in countries where it was illegal.

St Clair, 27, who is estranged from Musk, is a rightwing influencer, author and political commentator.

She and Musk are the parents of a son born in 2024. She is being represented by Carrie Goldberg, a victims’ rights lawyer who specialises in holding tech companies accountable and has previously represented women who were victims of sexual harassment and abuse.

Goldberg told the Guardian: “xAI is not a reasonably safe product and is a public nuisance. Nobody has borne the brunt more than Ashley St Clair. Ashley filed suit because Grok was harassing her by creating and distributing nonconsensual, abusive, and degrading images of her and publishing them on X.

“This harm flowed directly from deliberate design choices that enabled Grok to be used as a tool of harassment and humiliation. Companies should not be able to escape responsibility when the products they build predictably cause this kind of harm. We intend to hold Grok accountable and to help establish clear legal boundaries for the entire public’s benefit to prevent AI from being weaponised for abuse.”

The filing alleges the social media company “retaliated against her, demonetising her X account and generating multitudes more images of her, including unlawful images of her in sex positions, covered in semen, virtually nude, and images of her as a child naked”.

Images generated by Grok, according to the filing, include one of her as a 14-year-old stripped into a string bikini, plus sexualised content of St Clair as an adult, including a request to “put the girl in a bikini made out of floss”.

Not only were the images de facto nonconsensual, the filing states, but “Grok and xAI also had explicit knowledge that St Clair was not consenting to the creation or dissemination of these images because of her requests for removal”.

Grok also responded to user requests to add tattoos on to St Clair’s body – including the words “Elon’s whore”, the filing said. St Clair, who is Jewish, alleges Grok digitally dressed her in a bikini decorated with swastikas.

The lawsuit states that X “financially benefited from the creation and dissemination of nonconsensual, realistic, sexualised deepfake content depicting Plaintiff as a minor and adult”. The filing states that “xAI is directly liable for the harassment and explicit images created by its own chatbot, Grok”.

Musk has posted on X that the users of his app are responsible for the images they create. He said recently: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” He added: “Obviously, Grok does not spontaneously generate images, it does so only according to user requests.”

X said on Thursday it had “zero tolerance for any forms of child sexual exploitation, nonconsensual nudity, and unwanted sexual content”. The company has filed a countersuit claiming that, according to the terms of service of X, St Clair cannot sue the company in New York but has to do so in Texas.

St Clair previously told the Guardian that she felt “horrified and violated”, adding: “It’s another tool of harassment. Consent is the whole issue.”

Acolytes of Musk had disliked her since she went public about his desire to build a “legion” of children, she said. Musk is the father of 13 other children, with three other women.

X has been contacted for comment.
