22 Comments
Huvinesh Rajendran

I’m not saying Anthropic is right here, but it’s worth fighting for their right to be wrong. Seems like they have received so much goodwill from this, but idk how long that’s going to last. I’m an AI Engineer in my day job, and I’ve already received orders from management to switch all of our products from OpenAI models to Anthropic models.

Mike Randolph

Here is my thinking, with the help of Claude and ChatGPT:

The entity being monitored should not design its own monitoring architecture. That's not a political claim. It's an engineering principle that holds regardless of who's in office. And none of the three enforcement architectures that just played out satisfy it.

Contract-only red lines (Anthropic's approach): enforcement is retrospective — litigation, years after the fact, useless during operations.

Vendor-operated safety stack (OpenAI's approach): enforcement can be real-time, a genuine advantage — but the monitor's revenue depends on the contract continuing, which is conflicted, not independent.

"All lawful purposes" defined by executive-branch authorities (the Pentagon's position): enforcement is circular. In the contract language OpenAI published, the intelligence-use standard cites Executive Order 12333 and related directives as the governing framework — but those authorities are interpreted by the same executive apparatus being monitored.

Three architectures. Three failure modes.

The missing option is the boring one: a correction loop with a monitor independent of both buyer and vendor — the structural role the NRC plays for nuclear power or the FAA for aviation. Not a private company deciding whether to halt operations. A statutory body with the independence and authority to audit classified AI deployments.

Altman and Amodei appear to agree on more red lines than the current discourse suggests — both named the same two. What neither can credibly provide alone is independent oversight.

And after the supply chain risk designation — which created costs extending well beyond the lost contract into every business relationship touching the Pentagon — both have self-interested reasons to demand it. The same body that prevents misuse also limits the executive branch's discretionary leverage over companies that negotiate contract terms.

What would prove me wrong: if competitive dynamics between these companies prevent coordination even when both face the same structural threat.

Michael Sullivan

Your proposed monitoring role is, it seems to me, butting up very hard against the right's embrace of the unitary executive legal theory. It's now very hard to have one branch of the government act as a meaningful check on another.

Mike Randolph

Agree, but I think it goes deeper than unitary executive theory. Every proposed monitoring path shares three defects: they detect but can't compel, the monitors bear no cost for missing things, and the informal enforcement layer that made any of it work — career civil service, press gatekeeping, professional norms — has degraded underneath the formal structures. No redesign of the formal mechanisms fixes that.

Benjamin Gibert

Another great piece exploring the nuance beyond the headlines. Given all the unknowns, it's hard to know what exactly happened. But it does feel like this came down to negotiating personalities more than the actual language in the contract.

D72

the biggest piece of context missing in this article is that this DoD (perhaps it would be more fitting to just refer to them as the DoW) is part of an abnormal administration.

that they reacted with a blowhard, legally unbackable SCR designation demonstrates that the type of administration you're dealing with is not normal.

another piece of context? greg brockman's $25M donation to said administration.

you don't need complete information to read hands, understand incentives, and manage risk.

Rohit Krishnan

I agree the admin is jumpy and abnormal, but then Anthropic, alongside Palantir, has been working with them for over a year now. I also think the donation is overplayed.

There are a lot of “hints” floating around that make people play 4D chess but in the end I came down on the side I wrote about.

D72
Mar 2 (edited)

broader context might be that: altman is not a good product leader or a good operator (see loopt, see his undisciplined capital allocation without FCF, see how he allowed top talent to leave openai); he is, as PG has observed, "extremely good at becoming powerful"; and extraordinarily ambitious.

the man does deals - he's one of the best dealmakers ever. and he needs more deals and bigger deals since ant and google have both caught up in big ways. it's not a surprise his hyperambition and openai's survival have led him to compromising decisions

Frank Greco

Many colleagues in the AI engineering space are switching from OpenAI to Anthropic. No one trusted OpenAI or Sam Altman before; now it's assumed you shouldn't trust either of them.

Rohit Krishnan

This is the kind of behaviour that I find most perplexing, and an example of mood affiliation.

Frank Greco

Mood? Let's be honest, no one trusts any of the foundation models on privacy or theft. For OpenAI to get that govt contract, they had to accede to Hegseth's demands. Clearly, Hegseth has no idea that GenAI is probabilistic and non-deterministic, and that autonomous killing and mass surveillance are huge risks, both diplomatically and ethically.

Gweezy

right on the money as always

A. Jacobs

This feels like a deeper issue around where constraints actually live in AI systems. Ethical principles and contractual red lines operate at a symbolic level, but operational control depends on deployment architecture and visibility into use. When those layers diverge, governance becomes very difficult.

Julian

Great piece!

> Which is why I feel

I know this is a typo you corrected but honestly my favourite line :)

Rohit Krishnan

Haha, thanks!!

marshm

Most of this seems like arguing about the deck chairs as the Titanic sinks. The govt. has already fired board members at the NLRB, MSPB, and FTC, and most of these firings are being enshrined as legal precedent by the current Supreme Court. The President cannot be prosecuted for anything at all; SCOTUS has ruled that legal. Qualified immunity and the Bivens doctrine mean most govt. officials can almost never be prosecuted for breaking the law. The terms of a contract mean very little when the law is insane and no one who breaks it can even be held responsible. It's already been broken and then retroactively labeled legal, as with the NSA. Anthropic's departure means maybe a 3-month delay in building the govt.'s surveillance architecture with OpenAI instead; we haven't solved anything.

Julián

"all lawful use" sounds like a reasonable standard until you realize lawful is a floor, not a ceiling. anthropic's position was: legal alone doesn't clear our red lines. openai's position was: your red lines are yours to enforce, not ours.

the DoW picked openai because militaries don't want a third-party ethics board inside the ops loop. but the long-run risk is obvious: "lawful" keeps expanding to match operational needs. who argues the ethics case in year 3?

Rohit Krishnan

It can and should be a democratic process. I’m highly skeptical of relying on Anthropic’s red lines, esp considering Dario’s memo now, and the fact that they still tried multiple times to do things like autonomous drones, and helped with targeting in Iran etc. There’s a little too much playing footsie to be swayed by some rhetoric here.

Julián

that's fair, and the iran/drone history is exactly why i'm skeptical of the institutional framing too. my point wasn't that anthropic's red lines are trustworthy. it's that "lawful" as a standard expands to meet operational needs regardless of who draws the line. democratic process is the right answer in theory. the question is whether it moves fast enough when the technology doesn't wait for committee.

Mark Ramm

Smart structural analysis. The two enforcement models — Anthropic's contractual red lines vs. OpenAI's operational visibility — are a distinction most coverage misses.

But I'd push back on framing this as a symmetrical engineering problem. The government's own behavior reveals it wasn't evaluating enforcement models. It was picking winners.

OpenAI's president gave $25M to MAGA Inc. OpenSecrets documented that the companies donating to Trump's AI ballroom project are the same ones that are receiving billions in federal AI contracts and that helped write the AI Action Plan. Anthropic isn't on the donor list.

Emil Michael was still on the phone negotiating with Anthropic when Hegseth tweeted the supply chain risk designation. Then CENTCOM used Claude — via the Maven Smart System — to bomb Iran hours after the ban.

The enforcement architecture question is real. But it's downstream of a simpler one: who had $25 million worth of access to the table where enforcement architecture gets decided?

https://theramm.substack.com/p/the-pentagon-banned-claude-as-a-national

The Synthesis

The $25M buys you the seat. But the enforcement architecture question isn't downstream — it's what makes the seat worth buying. OpenAI's "operational visibility" model is attractive to DoW precisely because it doesn't constrain future use. The donation gets you to the table; the permissive enforcement model is the product you're selling once you're there. It's the same pattern defense procurement has always run on: the contract terms matter more than the sticker price.