How AI firm Anthropic wound up in the Pentagon’s crosshairs



Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman’s OpenAI or Elon Musk’s xAI. Its CEO and co-founder, Dario Amodei, was an industry fixture but hardly a household name outside Silicon Valley, and its chatbot Claude lagged behind ChatGPT in popularity. That perception has shifted as Anthropic has become the central actor in a high-profile fight with the Department of Defense over the company’s refusal to allow Claude to be used for domestic mass surveillance and for autonomous weapons systems that can kill people without human input. Amid tense negotiations, the AI firm rejected a Pentagon deadline for a deal last week, a move that led Pete Hegseth, the defense secretary, to accuse Anthropic of “arrogance and betrayal” of its home country while demanding that any companies that work with the US government cease all business with the AI firm.

The week since has brought more chaos. OpenAI announced it had struck its own deal with the DoD, prompting employee pushback and leading Amodei to accuse rival CEO Sam Altman of giving “dictator-style praise” to Donald Trump, a remark for which Amodei later apologized. Trump, meanwhile, denounced Anthropic in an interview with Politico, saying he “fired them like dogs”. On Thursday, the DoD formally declared Anthropic a supply-chain risk and demanded other businesses cut ties – the first time an American company has ever been targeted with the designation – a move that poses grave financial consequences for the company if fully enacted. The feud has intensified an unsettled debate over how AI will be used in warfare and who will be accountable for the results, while also representing one of the most dramatic disagreements so far between the tech industry and the Trump administration.

As the military rapidly adopts the technology for its operations, including in the war with Iran, it has turned previously hypothetical situations into real-world ethical tests for AI companies. Anthropic’s standoff with the DoD is also the culmination of what researchers see as some of the AI firm’s inherent contradictions. It is a company founded on the premise of creating a safe future for AI, which has nevertheless struck major partnerships for classified work with the Pentagon and the surveillance tech giant Palantir. Its leadership says it is deeply worried about the existential risks of AI, though it recently dropped a founding safety pledge, citing the speed of industry competition. It has pledged transparency, but like other AI companies it has developed its models through a rapacious demand for proprietary data, with court records documenting how it led a secretive effort to scan and destroy millions of physical books to train Claude.

Yet recent weeks have shown that there are some red lines Anthropic appears unwilling to cross, a rarity within a tech industry that has largely made itself subservient to the Trump administration and to a fear of falling behind industry rivals. The fallout from its resistance to the Pentagon’s demands has so far been a public relations victory for Anthropic, with Claude surging in popularity after the deal fell apart and OpenAI left to bandage its reputation. Anthropic did not respond to a request for comment on a set of questions related to this article. The longer-term implications for Anthropic are less clear, with some defense contractors as well as the US state and treasury departments already stepping away from using its AI models and the Trump administration intent on punishing Anthropic for its dissent. Anthropic has said that it will challenge its supply-chain risk designation in court, while Amodei has also reportedly reopened negotiations with the DoD in recent days to try to reach a resolution.

Before he was sparring with Sam Altman and the Pentagon, Dario Amodei was one of OpenAI’s leading researchers. Amodei joined Altman’s firm in 2016 after a stint at Google, taking on a prominent role in developing OpenAI’s GPT models and eventually becoming vice-president of research. His younger sister, Daniela, meanwhile served as vice-president of safety and policy, helping oversee the ethical development of OpenAI’s models. As OpenAI rapidly advanced its technology and Altman divisively consolidated his authority over the company, however, the Amodeis broke away in 2021, prior to the release of ChatGPT, to found Anthropic – taking several other OpenAI employees along with them. They branded Anthropic as an “AI safety and research company”, and central to their new firm was a vow to build safer AI systems that would follow detailed sets of principles they describe as a constitution.

In 2024, Amodei published a lengthy essay titled “Machines of Loving Grace” that outlined some of his utopian vision for the future of AI. He argued that AI could eliminate most cancers, prevent nearly all forms of infectious disease and reduce economic inequality. He also presented vague ideas for how AI would integrate into everything from decision-making in the justice system to how the government could provide services such as health benefits. On democracy, however, Amodei was more skeptical. “I see no strong reason to believe AI will preferentially or structurally advance democracy and peace,” he wrote.

Amodei, who received a doctorate in biophysics from Princeton University before becoming enthralled with the potential of artificial intelligence, had for years been concerned about the existential risks of developing AI and seen parallels to the creation of nuclear weapons. One of his favorite books is The Making of the Atomic Bomb by Richard Rhodes, a nearly 900-page Pulitzer-winning account of how nuclear scientists ushered in a new and dangerous world through the technology they created. While a mix of discomfort and pride about becoming the new Robert Oppenheimer is common among CEOs of AI companies, part of the Amodeis’ focus on existential risk has ties to a utilitarian movement known as “effective altruism”, which became popular in Silicon Valley throughout the 2010s and advocated for projects that would maximize global good. The movement, which has since fallen out of vogue after a series of scandals such as its close association with the disgraced crypto billionaire Sam Bankman-Fried, also featured a subset of people concerned with AI safety – the idea that one of the biggest global threats is the development of AI that could turn against humanity. Although the Amodeis have denied being adherents of effective altruism, many of the company’s core principles echo its language, such as vows to “maximize positive outcomes for humanity in the long run”.

Some of Anthropic’s earliest investors, such as Facebook co-founder Dustin Moskovitz, also had connections to the effective altruism movement. Daniela Amodei’s husband, Holden Karnofsky, meanwhile co-founded and for years was CEO of Open Philanthropy, one of the largest effective altruism-based philanthropic funding organizations. When Hegseth declared Anthropic a supply-chain risk this past week, he also criticized Anthropic as being “cloaked in the sanctimonious rhetoric of ‘effective altruism’”. The AI safety movement has its critics outside the Pentagon as well, including researchers who believe that concerns about existential threats from artificial intelligence are often a distraction from the more tangible, mundane harms and biases of AI.

“They would talk about these existential risks and the misappropriation of AI for bioterrorism. I always thought that those were either too distant or too out of reach,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University. “That it didn’t quite fully understand risk.”

The difference between the concerns of the capital-S “AI Safety” movement and the broader field of safety and ethics in AI is a long-running schism within the industry. It also offers an explanation for some of the dissonance over how Anthropic could be so worried about developing AI to benefit humanity while at the same time allowing its models to be used by intelligence and defense agencies for lethal purposes. “There seems to be a little bit of a misunderstanding in the discourse – that because Anthropic have clearly put themselves out as accountable, then they are against the use of their systems in warfare,” said Margaret Mitchell, an AI ethics researcher and chief ethics scientist at the tech company Hugging Face. “But that’s not true.”

“It’s not that they don’t want to kill people. It’s that they want to make sure to kill the right people,” Mitchell said. “And who the right people are is decided by the government.”

While Anthropic vowed to build a safer AI, it pursued a different sector of the AI market than its rivals.

If OpenAI’s ChatGPT is presented as a consumer-forward chatbot that many people treat like a search engine or AI companion, Anthropic has geared Claude more toward enterprise software solutions and integration into the organizational infrastructure of workplaces. The distinction, though boring on its face, has made Claude the preferred choice at many organizations and helped make it the first model permitted for classified use in military systems. Anthropic’s integration into the military began with a 2024 deal with Palantir to allow Claude to be used within Palantir’s systems, which already operated in classified environments. The two companies touted the agreement as a way to drastically reduce the resources and time needed for military operations and intelligence gathering. The following year, Anthropic, along with several other major AI companies, struck a $200m deal with the DoD to use their AI tools for military operations.

What has since become apparent is that these deals did not include permanent agreements on how the government could use Anthropic’s AI or what safety guardrails would be fixed on its models. With the military’s indirect access via Palantir’s system, Anthropic had less direct control over its technology’s use than it would through Claude’s website. That discrepancy came to a head in recent months as the government requested that Anthropic loosen its safety restrictions to allow a wider range of uses, kicking off the current dispute between the company and the Pentagon. Anthropic’s hiring in recent years of former Biden staffers, Amodei’s political opposition to Trump and Hegseth’s desire to eradicate “wokeness” from the military have all added a political dimension to the standoff. The Pentagon’s chief technical officer, Emil Michael, also appears to hold a personal distaste for Amodei, publicly accusing him of being a “liar” and having a “God complex”.

Giving a sense of urgency to the negotiations is the US military’s use of Claude for a wide range of operations, including its mission to capture the Venezuelan leader Nicolás Maduro and its war with Iran. The Washington Post reported that the military is using Palantir’s Maven smart system, which has Claude embedded into it, to determine which sites in Iran to bomb and to provide analysis of its strikes. While Anthropic’s dispute with the Pentagon has elements unique to AI, it is also emblematic of problems around dual-use technologies, according to experts – products that have both civilian and military applications. A technology developed for a broad consumer base and then adapted for use in classified military systems is bound to hit fault lines, since it is not tailor-made for specific use cases or built with parameters specifically for military use. Companies can find their product being repurposed in ways they may ethically oppose but have little ability to prevent.

“The same technology that underlies finding a bird in a picture underlies finding a civilian fleeing from their home,” Mitchell said by way of example. “That’s the same type of model, just very slightly different fine-tuning.”

Another issue is that tech companies do not have a perfect window into how their technologies will be used in classified systems, while at the same time the military does not know exactly how proprietary technologies like Anthropic’s Claude actually work – a problem the law professor Ashley Deeks has called the “double black box”. Even contracts on agreed-upon use can be fuzzy, especially given the Trump administration’s distaste for legal oversight.

“There is an expectation, generally, that parties to a contract are supposed to comply with the contract,” said Deeks, a professor at the University of Virginia School of Law. “But, of course, contracts need to be interpreted, and the military might interpret a phrase one way where the company intended it to mean something else.”

Hanging over the feud is also the broader question of who should decide what AI is used for, and a lack of detailed regulation from Congress on autonomous weapons systems. Although neither Anthropic nor the Pentagon believes that a private company should have decision-making power over AI’s military applications, right now the company is functioning as one of the only checks on what appear to be the military’s expansive desires for weaponizing AI. “Do we want the DoD to be using AI for autonomous weapons systems, and if so, in what settings, with what restrictions, at what level of confidence, what level of risk are we willing to take on?” Deeks said.

“It’s hard for us to have a sense out in the public about how the DoD is thinking about all this.”

Cancer death rate in Britain down by almost a third since 1980s

The rate of people dying from cancer in the UK has fallen by almost a third since the 1980s amid seismic progress in prevention, diagnosis and treatment, a report has found. About 247 in every 100,000 people die from cancer each year, a 29% drop from the 1989 peak of about 355 per 100,000, according to an analysis by Cancer Research UK (CRUK). Cancer remains Britain’s biggest killer, causing about one in four deaths, and survival rates lag behind those of a number of European countries, including Romania and Poland. However, in the past decade alone, the rate of people dying from cancer has fallen by 11%. The death rate for ovarian cancer dropped by 19% between 2012-2014 and 2022-2024, stomach cancer fell by 34% and lung cancer by 22%.


NHS England pauses new referrals for masculinising or feminising hormone treatment in under-18s

The NHS is pausing new referrals for masculinising or feminising hormone treatment for 16- and 17-year-olds after an in-depth review found there was insufficient evidence to support its continued use. Prescriptions for hormones had been available in England for under-18s with a diagnosis of gender incongruence or dysphoria who met certain criteria. But after the Cass review, NHS England commissioned its own review of all the available clinical evidence. That review has now concluded and found the evidence did not back the continued use of the treatment for 16- and 17-year-olds. In her review of children’s gender care, Hilary Cass had recommended “extreme caution” in providing such treatment and a “clear clinical rationale for providing hormones at this stage rather than waiting until an individual reaches 18”.


Recreational drugs can more than double risk of stroke, study suggests

Recreational drugs can more than double the risk of stroke, with some of the most concerning impacts seen among younger people, a major review suggests. Scientists analysed medical data from more than 100 million people and found that the risk of stroke was 122% higher for amphetamine users and 96% higher for cocaine users compared with those who did not take the drugs. Cannabis users were also at greater risk, suffering 37% more strokes than non-users, the review found, though researchers saw no evidence that opioids, a class of highly addictive painkillers, added to a person’s risk of stroke. The rise in strokes observed in connection with some drugs was not confined to older people. When researchers focused on under-55s, they saw a near tripling of stroke risk among amphetamine users.


Martha’s rule may have saved 400 lives so far in England, figures show

More than 400 lives may have been saved as a result of Martha’s rule, which lets NHS patients request a review of their care, official figures reveal. Helplines received more than 10,000 calls in the first 16 months of the scheme after its introduction in England in 2024, according to data seen by the Guardian. Thousands of patients were either moved to intensive care, received drugs they needed or benefited from other changes as a direct result of the calls. The system is named after Martha Mills, 13, who died in 2021 from sepsis after a bicycle accident. A coroner found she would probably have survived if she had been moved to the intensive care unit at King’s College hospital in London when she began deteriorating.


How ADHD diagnosis helped my mental health | Letters

In suggesting there is a possibility that we all lie somewhere on an ADHD continuum, your correspondent (Letters, 27 February) is missing the point. ADHD – and autism – are neurodiversities, meaning that the brains of individuals with ADHD and/or autism are “wired” differently from those of people with “typical” brains. In other words, you either have it or you don’t. To suggest that everyone is a bit ADHD or a bit autistic is insulting to those of us who actually are ADHD/autistic, and diminishes our lived experience. Yes, self-help tools can be useful.


Labour should aim to end sexual exploitation, not just curb its visibility | Letter

Your editorial on adult services websites (4 March) rightly raises urgent questions about platform harm and the government’s responsibility to act. Unseen’s modern slavery helpline indicated 799 potential victims of sexual exploitation in 2025. Reports of child sexual exploitation more than doubled in 2024 – from 53 to 110. These are not projections. They are cases reported directly to us by victims and frontline workers with nowhere else to call.