Gavin Newsom pushes back on Trump AI executive order preempting state laws

The ink was barely dry on Donald Trump’s artificial intelligence executive order when Gavin Newsom came out swinging. Just hours after the order went public Thursday evening, the California governor issued a statement saying the presidential dictum, which seeks to block states from regulating AI of their own accord, advances “grift and corruption” instead of innovation. “President Trump and David Sacks aren’t making policy – they’re running a con,” Newsom said, referencing Trump’s AI adviser and crypto “czar”. “Every day, they push the limits to see how far they can take it.” Trump’s executive order is a major victory for tech companies that have campaigned against legislative barriers to developing and deploying their AI products.

It also sets up a clash between state governments and the White House over the future of AI regulation. The immediate backlash from groups including child safety organizations, unions and state officials has highlighted the deeply contentious nature of the order and the diverse range of interests it affects. Several officials and organizations have already questioned the legality of the executive order, stating that Trump does not have the power to undermine state legislation on AI and denouncing the decree as the result of tech industry lobbying. California, home to some of the world’s most prominent AI companies and one of the most active states legislating AI, has been a locus for pushback against the order. “This executive order is deeply misguided, wildly corrupt, and will actually hinder innovation and weaken public trust in the long run,” California Democratic representative Sara Jacobs said in a statement.

“We will explore all avenues – from the courts to Congress – to reverse this decision.” After a draft version of Trump’s order leaked in November, state attorney general Rob Bonta said that his office would “take steps to examine the legality or potential illegality of such an executive order”, teeing up a precedent-setting duel between California and the White House. In September, Newsom signed a landmark AI law that would compel developers of large, powerful AI models known as “frontier models” to provide transparency reports and promptly report safety incidents or face fines of up to $1m. The governor touted the Transparency in Frontier Artificial Intelligence act as an example of how to regulate AI companies nationwide. “Our state’s status as a global leader in technology allows us a unique opportunity to provide a blueprint for well-balanced AI policies beyond our borders,” Newsom said in an address to the California state senate.

“Especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.” The September bill and other California legislation could be in Trump’s crosshairs. Thursday’s executive order calls for an AI litigation taskforce that would review state laws that do not “enhance the United States’ global AI dominance” and then pursue legal action or potentially withhold federal broadband funding. The taskforce will also consult with the administration’s AI and crypto “czar” to determine which laws to target. Although Trump has framed the executive order as a means of streamlining legislation and removing onerous patchwork regulation, critics have alleged that the government has never provided any comprehensive federal framework for regulating AI to replace state laws.

The order follows attempts to include similar AI moratoriums in bills earlier this year, which failed amid bipartisan backlash. Opponents instead view the order as a gift to major tech companies that have cozied up to the administration over the course of the year. “President Trump’s unlawful executive order is nothing more than a brazen effort to upend AI safety and give tech billionaires unchecked power over working people’s jobs, rights and freedoms,” the AFL-CIO president, Liz Shuler, said in a statement. Within hours of Trump signing the order, opposition grew louder among lawmakers, labor leaders, children’s advocacy groups and civil liberties organizations, which decried the policy. Other California Democratic leaders said the executive order was an assault on states’ rights and that the administration should instead focus on federal agencies and academic research to boost innovation.

“No place in America knows the promise of artificial intelligence technologies better than California,” said Alex Padilla, a senator for California. “But with today’s executive order, the Trump administration is attacking state leadership and basic safeguards in one fell swoop.” Similarly, Adam Schiff, another California senator, emphasized: “Trump is seeking to preempt state laws that are establishing meaningful safeguards around AI and replace them with … nothing.” Lawmakers from Colorado to Virginia to New York also took issue with the order. Don Beyer, a Virginia congressmember, called it a “terrible idea” and said that it would “create a lawless Wild West environment for AI companies”.

Likewise, Alex Bores, a New York state assemblymember, called the order a “massive windfall” for AI companies, adding that “a handful of AI oligarchs bribed Donald Trump into selling out America’s future”. Even Steve Bannon, the Trump loyalist and former adviser, criticized the policy. In a text message to Axios, Bannon said Sacks had “completely misled the President on preemption”. Mike Kubzansky, the CEO of Omidyar Network, a philanthropic tech investment firm that funds AI companies, similarly said “the solution is not to preempt state and local laws” and that ignoring AI’s impact on the country “through a blanket moratorium is an abdication of what elected officials owe their constituents”. Blowback against the order has also come from child protection organizations that have long expressed concerns over the effects of AI on children.

The debate over child safety has intensified this year in the wake of multiple lawsuits against AI companies over children who died by suicide after interacting with popular chatbots. “The AI industry’s relentless race for engagement already has a body count, and, in issuing this order, the administration has made clear it is content to let it grow,” said James Steyer, the CEO of the child advocacy group Common Sense Media. “Americans deserve better than tech industry handouts at the expense of their wellbeing.” A group of bereaved parents and child advocacy organizations has also spoken out. The group has been working to pass legislation to better protect children from harmful social media and AI chatbots, and on Thursday released a national public service announcement opposing the AI preemption policy.

Separately, Sarah Gardner, the CEO of Heat Initiative, one of the groups in the coalition, called the order “unacceptable”. “Parents will not roll over and allow our children to remain lab rats in big tech’s deadly AI experiment that puts profits over the safety of our kids,” Gardner said. “We need strong protections at the federal and state level, not amnesty for big tech billionaires.”

Elon Musk teams with El Salvador to bring Grok chatbot to public schools

Elon Musk is partnering with the government of El Salvador to bring his artificial intelligence company’s chatbot, Grok, to more than 1 million students across the country, according to a Thursday announcement by xAI. Over the next two years, the plan is to “deploy” the chatbot to more than 5,000 public schools in an “AI-powered education program”. xAI’s Grok is better known for referring to itself as “MechaHitler” and espousing far-right conspiracy theories than for public education. Over the past year, the chatbot has spewed various antisemitic content, decried “white genocide” and claimed Donald Trump won the 2020 election. Nayib Bukele, El Salvador’s president, is now entrusting the chatbot to create curricula in classrooms across the country.


Disney wants you to AI-generate yourself into your favorite Marvel movie

Users of OpenAI’s video generation app will soon be able to see their own faces alongside characters from Marvel, Pixar, Star Wars and Disney’s animated films, according to a joint announcement from the startup and Disney on Thursday. Perhaps you, Lightning McQueen and Iron Man are all dancing together in the Mos Eisley Cantina. Sora is an app made by OpenAI, the firm behind ChatGPT, that allows users to generate videos of up to 20 seconds from short text prompts. The startup previously attempted to steer Sora’s output away from unlicensed copyrighted material, though with little success, which prompted threats of lawsuits by rights holders. Disney announced that it would invest $1bn in OpenAI and, under a three-year deal perhaps worth even more than that large sum, license about 200 of its iconic characters – from R2-D2 to Stitch – for users to play with in OpenAI’s video generation app.


Musk calls Doge only ‘somewhat successful’ and says he would not do it again

Elon Musk has said the aggressive federal job-cutting program he headed early in Donald Trump’s second term, known as the “department of government efficiency” (Doge), was only “a little bit successful” and that he would not lead the project again. Musk said he wouldn’t want to repeat the exercise while speaking on the podcast hosted by Katie Miller, a rightwing personality with a rising profile who was a Doge adviser and is married to Stephen Miller, Trump’s hardline anti-immigration deputy chief of staff. Asked whether Doge had achieved what he’d hoped, Musk said: “We were a little bit successful. We were somewhat successful.” Doge created chaos and distress in the government machine in Washington DC, and by May more than 200,000 federal workers had been laid off and roughly 75,000 had accepted buyouts as a result of purges by Musk’s external team of often-young zealots.


ICE is using smartwatches to track pregnant women, even during labor: ‘She was so afraid they would take her baby’

Pregnant immigrants in ICE monitoring programs are avoiding care, fearing detention during labor and delivery.

In early September, a woman, nine months pregnant, walked into the emergency obstetrics unit of a Colorado hospital. Though the labor and delivery staff caring for her expected a smooth delivery, her case presented complications almost immediately. The woman, who was born in central Asia, checked into the hospital with a smartwatch on her wrist, said two hospital workers who cared for her during her labor, and whom the Guardian is not identifying to avoid exposing their hospital or patients to retaliation. The device was not an ordinary smartwatch made by Apple or Samsung, but a special type that US Immigration and Customs Enforcement (ICE) had mandated the woman wear at all times, allowing the agency to track her. The device was beeping when she entered the hospital, indicating she needed to charge it, and she worried that if the battery died, ICE agents would think she was trying to disappear, the hospital workers recalled.


From ‘glacier aesthetic’ to ‘poetcore’: Pinterest predicts the visual trends of 2026 based on its search data

Next year, we’ll mostly be indulging in maximalist circus decor, working on our poetcore, hunting for the ethereal or eating cabbage in a bid for “individuality and self-preservation”, according to Pinterest. The organisation’s predictions for Australian trends in 2026 have landed, and – according to the platform used by interior decorators, fashion lovers and creatives of all stripes – they include the 1980s, aliens, vampires and “forest magic”. Among the Pinterest 2026 trends report’s top 21 themes are “Afrohemian” decor (searches for the term are on the rise among baby boomers and Gen X); “glitchy glam” (asymmetric haircuts and mismatching nails); and “cool blue” (drinks, wedding dresses and makeup with a “glacier aesthetic”). Pinterest compared English-language search data from September 2024 to August 2025 with that of the year before, and claims an 88% accuracy rate. More than 9 million Australians use Pinterest each month.


UK police forces lobbied to use biased facial recognition technology

Police forces successfully lobbied to use a facial recognition system known to be biased against women, young people and members of ethnic minority groups, after complaining that another version produced fewer potential suspects. UK forces use the police national database (PND) to conduct retrospective facial recognition searches, whereby a “probe image” of a suspect is compared with a database of more than 19 million custody photos for potential matches. The Home Office admitted last week that the technology was biased, after a review by the National Physical Laboratory (NPL) found it misidentified Black and Asian people and women at significantly higher rates than white men, and said it “had acted on the findings”. Documents seen by the Guardian and Liberty Investigates reveal that the bias has been known about for more than a year – and that police forces argued to overturn an initial decision designed to address it. Police bosses were told the system was biased in September 2024, after a Home Office-commissioned review by the NPL found the system was more likely to suggest incorrect matches for probe images depicting women, Black people and those aged 40 and under.