Disappointing Oracle results knock $80bn off value amid AI bubble fears

Oracle’s shares tumbled 15% on Thursday in response to the company’s quarterly financial results, disclosed the day before.

Roughly $80bn was wiped off the value of the business software company co-founded by Donald Trump ally Larry Ellison, which fell from $630bn (£470bn) to $550bn, fuelling fears of a bubble in artificial intelligence-related stocks. Shares in the chipmaker Nvidia, seen as a bellwether for the AI boom, also fell in the wake of Oracle’s results.

The drop extended an 11.5% fall during after-hours trading that followed results showing a lower-than-expected 14% rise in revenues to $16bn in the latest quarter.

Investors were also spooked by Oracle raising forecasts for its already-enormous investment in AI. It expects capital expenditure to jump by 40% to $50bn, with the bulk of the increase aimed at building datacentres.

The company is managing a growing debt pile. Oracle’s long-term debt has surged by 25% over the past 12 months to $99.9bn.

Even the cost of insuring its debt rose on Thursday as investor confidence in the company waned.

The business posted weaker-than-expected quarterly revenues for the three months to the end of November, as sales at its cloud computing business grew at a slower pace than forecast, at 34%. Investors were also disappointed by slower-than-expected growth of 68% in revenues from its infrastructure business.

“Frankly, the report was not dramatically bad, but it came to confirm concerns around heavy AI spending, financed by debt, with an unknown timeline for revenue generation,” Ipek Ozkardeskaya, a senior analyst at Swissquote, said.

Continued optimism about the potential of AI technology has led to a leap in company valuations in recent months, but there has been a growing number of warnings from policymakers and business leaders who say stock market valuations could tumble if investors end up being disappointed by the progress or adoption of AI technology.

Oracle became an important tech player by creating software for Fortune 500 firms around the world. More recently, it has found strength in cloud computing, becoming the fastest-growing competitor to Amazon, Microsoft and Google. The surge in AI has also been a boon to Oracle, which has entered lucrative deals with the likes of OpenAI, the maker of ChatGPT.

However, there are also growing concerns about how reliant companies within the AI ecosystem are becoming on each other’s financing. Oracle said overnight that its measure of revenue from customer contracts rose by 440% over the past year, but analysts were wary when it emerged that the contracts were driven by new commitments from Meta and Amazon.

“Although these are two solid customers, it will not placate fears that big tech’s AI investments are becoming circular, which leaves it vulnerable to a loss of investor confidence,” Kathleen Brooks, a research director at XTB, said. “Overall, strong contract growth was not enough to placate fears about AI and the huge amount of [capital expenditure] spending required by companies to build AI infrastructure.”

Technology

ICE is using smartwatches to track pregnant women, even during labor: ‘She was so afraid they would take her baby’

Pregnant immigrants in ICE monitoring programs are avoiding care, fearing detention during labor and delivery.

In early September, a woman, nine months pregnant, walked into the emergency obstetrics unit of a Colorado hospital. Though the labor and delivery staff caring for her expected her to have a smooth delivery, her case presented complications almost immediately. The woman, who was born in central Asia, checked into the hospital with a smartwatch on her wrist, said two hospital workers who cared for her during her labor, and whom the Guardian is not identifying to avoid exposing their hospital or patients to retaliation. The device was not an ordinary smartwatch made by Apple or Samsung, but a special type that US Immigration and Customs Enforcement (ICE) had mandated the woman wear at all times, allowing the agency to track her. The device was beeping when she entered the hospital, indicating she needed to charge it, and she worried that if the battery died, ICE agents would think she was trying to disappear, the hospital workers recalled.


From ‘glacier aesthetic’ to ‘poetcore’: Pinterest predicts the visual trends of 2026 based on its search data

Next year, we’ll mostly be indulging in maximalist circus decor, working on our poetcore, hunting for the ethereal or eating cabbage in a bid for “individuality and self-preservation”, according to Pinterest. The organisation’s predictions for Australian trends in 2026 have landed, which – according to the platform used by interior decorators, fashion lovers and creatives of all stripes – include the 1980s, aliens, vampires and “forest magic”. Among the Pinterest 2026 trends report’s top 21 themes are “Afrohemian” decor (searches for the term are on the rise among baby boomers and Gen X); “glitchy glam” (asymmetric haircuts and mismatching nails); and “cool blue” (drinks, wedding dresses and makeup with a “glacier aesthetic”). Pinterest compared English-language search data from September 2024 to August 2025 with that of the year before and claims it has an 88% accuracy rate. More than 9 million Australians use Pinterest each month.


UK police forces lobbied to use biased facial recognition technology

Police forces successfully lobbied to use a facial recognition system known to be biased against women, young people, and members of ethnic minority groups, after complaining that another version produced fewer potential suspects. UK forces use the police national database (PND) to conduct retrospective facial recognition searches, whereby a “probe image” of a suspect is compared to a database of more than 19 million custody photos for potential matches. The Home Office admitted last week that the technology was biased, after a review by the National Physical Laboratory (NPL) found it misidentified Black and Asian people and women at significantly higher rates than white men, and said it “had acted on the findings”. Documents seen by the Guardian and Liberty Investigates reveal that the bias has been known about for more than a year – and that police forces argued to overturn an initial decision designed to address it. Police bosses were told the system was biased in September 2024, after a Home Office-commissioned review by the NPL found the system was more likely to suggest incorrect matches for probe images depicting women, Black people, and those aged 40 and under.


Trump clears way for Nvidia to sell powerful AI chips to China

Donald Trump has cleared the way for Nvidia to begin selling its powerful AI computer chips to China, marking a win for the chip maker and its CEO, Jensen Huang, who has spent months lobbying the White House to open up sales in the country. Before Monday’s announcement, the US had prohibited sales of Nvidia’s most advanced chips to China over national security concerns. Trump posted to Truth Social on Monday: “I have informed President Xi, of China, that the United States will allow NVIDIA to ship its H200 products to approved customers in China, and other Countries, under conditions that allow for continued strong National Security. President Xi responded positively!” Trump said the Department of Commerce was finalising the details and that he was planning to make the same offer to other chip companies, including Advanced Micro Devices (AMD) and Intel. Nvidia’s H200 chips are the company’s second most powerful, and far more advanced than the H20, which was originally designed as a lower-powered model for the Chinese market that would not breach restrictions, but which the US banned anyway in April.


AI researchers are to blame for serving up slop | Letter

I’m not surprised to read that the field of artificial intelligence research is complaining about being overwhelmed by the very slop that it has pioneered (Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’, 6 December). But this is a bit like bears getting indignant about all the shit in the woods. It serves AI researchers right for the irresponsible innovations that they’ve unleashed on the world, without ever bothering to ask the rest of us whether we wanted them. But what about the rest of us? The problem is not restricted to AI research – their slop generators have flooded other disciplines that bear no blame for this revolution. As a peer reviewer for top ethics journals, I’ve had to point out that submissions are AI-generated slop.


EU opens investigation into Google’s use of online content for AI models

The EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from publishers and YouTube creators for artificial intelligence. The European Commission said on Tuesday it would examine whether the US tech company, which runs the Gemini AI model and is owned by Alphabet, was putting rival AI owners at a “disadvantage”. The commission said: “The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage.” It said it was concerned that Google may have used content from web publishers to generate AI-powered services on its search results pages without appropriate compensation to publishers and without offering them the possibility to refuse such use of their content. The commission said it was also concerned as to whether Google had used content uploaded to YouTube to train its own generative AI models without offering creators compensation or the possibility to refuse.