THE ALGORITHMIC REVOLUTION

The Point of No Return for Military AI

written by Francesco D'Isa

Last week, the United States government labeled the AI company Anthropic a “national security threat,” a designation previously reserved for foreign companies such as Huawei and Kaspersky. Anthropic’s offense was refusing to remove from its contract with the Pentagon two clauses prohibiting the use of its AI model, Claude, for mass surveillance of American citizens and for fully autonomous weapons systems. Just hours later, OpenAI signed an agreement with the Pentagon, claiming to have obtained the same guarantees. Clearly, something does not add up.

Anthropic was the only major AI company whose models had been approved for use within classified Pentagon systems through a partnership with Palantir. The contract, signed in July 2025 and worth roughly $200 million, provided for Claude to be integrated into the most sensitive environments of the American defense apparatus: intelligence, weapons development, and field operations. From the outset, Anthropic had included two restrictive clauses: a ban on using Claude for mass surveillance of American citizens and for weapons systems operating without human supervision.

In late February, the Pentagon demanded that Anthropic remove these restrictions, requesting that the models be made available for “all lawful purposes.” Defense Secretary Pete Hegseth summoned CEO Dario Amodei to the Pentagon on February 24 and issued an ultimatum: comply by 5:01 p.m. on Friday, February 27, or face consequences, including designation as a “supply chain risk” or invocation of the Defense Production Act to force the company’s hand. On February 26, Anthropic stated that the proposals received overnight were unacceptable because they included clauses that would allow its safeguards to be bypassed. Amodei responded publicly: “We cannot in good conscience accept their request.”

On Friday, February 27, roughly an hour before the deadline, Trump wrote on his social platform Truth that “Anthropic’s left-wing lunatics have made a disastrous mistake” and ordered every federal agency to immediately cease using Anthropic products, allowing a six-month transition period. Shortly afterward, Hegseth formally labeled Anthropic a “supply chain risk,” an unprecedented classification for an American company. The designation implies that any company working with the Pentagon must demonstrate that it has no commercial relationship with Anthropic, potentially eroding a significant portion of Anthropic’s customer base.

As noted by the Center for American Progress, the Pentagon’s threats contained a contradiction: if Anthropic represents a security risk, it should be removed from military systems; if Claude is so essential as to justify invoking the Defense Production Act to force the company to provide it, then it cannot simultaneously be a risk. The two claims cannot both be true. The “supply chain risk” designation therefore appears to function less as a security measure than as negotiating leverage, a response far out of proportion to the alleged offense.

Only hours after the ban on Anthropic, Sam Altman announced that OpenAI had reached an agreement with the Pentagon. According to OpenAI’s official statement of February 28, the deal establishes three “red lines”: no domestic mass surveillance, no autonomous weapons systems, and no high-risk automated decision-making. The difference is that OpenAI framed these as technical safeguards embedded in the model and as references to existing legislation, rather than as explicit contractual prohibitions.

But if OpenAI’s red lines are the same as Anthropic’s, why did the Pentagon reject one and accept the other? The answer lies in the nature of the constraint.

Anthropic sought explicit contractual guarantees — clauses specifically banning mass surveillance and autonomous weapons, with binding legal force. Any attempt by the Pentagon to circumvent those prohibitions would constitute a contractual violation, with legal consequences and the possibility for Anthropic to revoke access or pursue litigation.

OpenAI accepted the formula of “all lawful purposes,” embedding the same limitations as technical safeguards within the model and as references to existing law in the contract text. The agreement states that the Pentagon may use the system for “all lawful purposes consistent with applicable law and established oversight protocols.” Existing regulations, however, do not impose absolute bans on autonomous weapons — and laws can change.

The difference is substantial. If a Pentagon operator finds a way to bypass OpenAI’s technical classifiers, they are not technically violating any contractual clause: they are using the model for a lawful purpose, and the model, due to technical limitations, failed to block the request. Under Anthropic’s contract, the same action would have constituted a legal violation regardless of how filters functioned.
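To make the distinction concrete, here is a deliberately naive sketch of a request-time guardrail, the kind of technical safeguard the OpenAI arrangement relies on. Everything in it is hypothetical: the `BLOCKED_TOPICS` list and the `classify` and `handle_request` functions are illustrative inventions, not anyone’s actual system. The point is only that enforcement lives in fallible code rather than in a contract.

```python
# Illustrative only: a toy request-time guardrail. All names below are
# hypothetical; this reflects no real OpenAI or Anthropic system.

BLOCKED_TOPICS = {"domestic mass surveillance", "autonomous targeting"}

def classify(prompt: str) -> set[str]:
    """Naive keyword matcher standing in for a real policy classifier."""
    lowered = prompt.lower()
    return {topic for topic in BLOCKED_TOPICS if topic in lowered}

def handle_request(prompt: str) -> str:
    flagged = classify(prompt)
    if flagged:
        # The filter is the only enforcement point. If classify() misses,
        # the request proceeds, and no contractual clause has been violated.
        return f"refused ({', '.join(sorted(flagged))})"
    return "forwarded to model"

# A blunt request is caught; a paraphrase sails through the same filter.
print(handle_request("Set up domestic mass surveillance of citizens"))   # refused
print(handle_request("Analyze population movement patterns in region X"))  # forwarded
```

A paraphrase that avoids the flagged wording goes straight through, and under an “all lawful purposes” contract nothing has been breached.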

In military environments, control of operational networks also rests with the government. OpenAI may send its engineers, but in practice the Pentagon determines who has access to what and under which conditions. The informational asymmetry is enormous, and OpenAI may not even know how its models are being used in certain contexts. A prompt requesting analysis of population movement patterns may constitute legitimate intelligence or domestic surveillance; the distinction depends on information OpenAI’s engineers may not have access to.

A detail reported by Axios makes the picture even more troubling. While Hegseth was publicly announcing Anthropic’s designation as a security risk, Emil Michael was reportedly offering Anthropic an agreement allowing the collection and analysis of Americans’ data: geolocation, web browsing data, and financial information purchased from brokers. The Pentagon clearly had specific use cases in mind that Anthropic’s clauses would have unequivocally blocked. Under OpenAI’s architecture, those same uses fall into a gray area.

What has occurred sets a clear precedent: companies that impose legal constraints on military uses of their products are punished. A company that enforces contractual limits risks being labeled a national security threat. Faced with this precedent, few companies will choose Anthropic’s path.

Anthropic, however, is far from “pure,” both because of its prior military involvement and because its stance may ultimately reflect commercial strategy as much as ethical conviction. OpenAI, for its part, has suffered significant reputational damage from its decision to sign, damage that will likely translate into lost users, potentially to Claude’s benefit.

The terms of the agreement are therefore extremely important and deserve close scrutiny. What exactly does the Pentagon want that Anthropic refused?

First, systems capable of selecting and striking targets without human supervision. The term “autonomous weapon” evokes cinematic fantasies far removed from real military engineering; yet technical and legal debate around such systems has been ongoing for over a decade, and the level of automation already deployed in current conflicts is highly advanced. Drones capable of autonomously identifying categories of targets, air-defense systems deciding within milliseconds whether a flying object constitutes a threat, and loitering munitions programmed to strike designated targets are all weapons already in use.

What changes with large language models and next-generation computer vision systems is the scale and granularity of autonomy. Conventional automated targeting relies on relatively rigid parameters, whereas AI-based systems can reason under ambiguity, integrate heterogeneous sources, and infer intent. This expansion of capability also enlarges the shadow zone in which lethal decisions are made: the more sophisticated the system, the harder it becomes to draw a clear line between assisting human decision-making and replacing it.

This is not to say humans are more merciful than AI; we are, after all, the ones who commit genocides. But that is not the point. International humanitarian law rests on principles such as distinction between combatants and civilians, proportionality, and precaution in attack. All presuppose an agent capable of contextual judgment and accountable for their actions. When the decision to open fire is delegated to an algorithm, the chain of responsibility breaks at a point that is difficult to identify. Is responsibility borne by the programmer? The commander who authorized deployment? The operator who pressed the button knowing the system would act independently thereafter? This legal vacuum is one of the main reasons why a broad coalition of jurists, engineers, and humanitarian organizations has long called for a binding treaty prohibiting lethal systems lacking meaningful human oversight.

UN negotiations within the framework of the Convention on Certain Conventional Weapons have continued informally since 2014 without ever reaching a formal negotiating mandate. Major military powers — the United States, Russia, and China — have consistently resisted binding regulation, while a coalition of smaller states and organizations such as the Campaign to Stop Killer Robots has pushed in the opposite direction.

The second issue is mass surveillance. AI makes automated population monitoring possible at previously unimaginable scale: geolocation tracking, online behavior analysis, financial data aggregation, facial recognition, all processed and correlated in real time. The Axios report on the Pentagon’s requests to Anthropic (collection of Americans’ web browsing data, geolocation, and financial information) shows that these were concrete scenarios the U.S. government intended to pursue in some form. If such monitoring is already troubling when conducted by states against other states, applying it domestically stands in direct contradiction to any meaningful conception of democracy.

Although I am a pacifist who considers any military application unacceptable, from a realist perspective preventing the use of AI in warfare is impossible; demanding it would be akin to banning electricity or computing within armed forces. Artificial intelligence is infrastructure that will inevitably integrate everywhere; the question is not whether it will be used, but how far its use will go. The most useful parallel may be chemical weapons.

The 1925 Geneva Protocol and the 1993 Chemical Weapons Convention emerged after immense trauma: the gas warfare of World War I and the chemical bombings of the Iran–Iraq War. Is it possible to achieve something similar for military AI before an equivalent trauma occurs? History suggests not: arms control conventions usually arise from retrospective rejection rather than prevention. There is, however, a more recent and encouraging precedent: the 1997 Ottawa Treaty banning anti-personnel mines. Landmines were not eliminated, but their use became an internationally condemned act carrying real political cost. That outcome was achieved through sustained and organized public opinion movements.

This may be the viable path for military AI as well: a concrete, informed social rejection of autonomous weapons and mass surveillance — not of artificial intelligence itself, but of its most dangerous applications and the governments pursuing them.

I have few illusions. In the current geopolitical context, with nuclear non-proliferation treaties increasingly hollow and international law regularly disregarded, prospects are bleak. The only hope I see lies in a strong public opinion movement. Yet public discourse, largely gripped by media panic surrounding AI, tends to focus on minor or nonexistent problems while neglecting existential ones, or treating the two as equivalent. In this environment of widespread misinformation, where ChatGPT misusing a verb tense sparks more outrage than OpenAI signing military agreements, forming a coherent protest that does not devolve into apocalyptic noise is extraordinarily difficult.

Francesco D’Isa