9 Comments
Huvinesh Rajendran:

I’m not saying Anthropic is right here, but it’s worth fighting for their right to be wrong. They seem to have earned a lot of goodwill from this, though I don’t know how long that will last. I’m an AI engineer in my day job, and I’ve already received orders from management to switch all of our products from OpenAI models to Anthropic models.

Benjamin Gibert:

Another great piece exploring the nuance beyond the headlines. Given all the unknowns, it’s hard to say exactly what happened. But it does feel like this came down to negotiating personalities more than the actual language of the contract.

Dave:

the biggest piece of context missing from this article is that this DoD (perhaps it would be more fitting to just call them the DoW) is part of an abnormal administration.

that they reacted with a blowhard, legally indefensible SCR designation demonstrates that the type of administration you're dealing with is not normal.

another piece of context? greg brockman's $25M donation to said administration.

you don't need complete information to read hands, understand incentives, and manage risk.

Rohit Krishnan:

I agree the admin is jumpy and abnormal, but then Anthropic, along with Palantir, has been working with them for over a year now. I also think the donation is overplayed.

There are a lot of “hints” floating around that make people play 4D chess, but in the end I came down on the side I wrote about.

Dave:

broader context might be that altman is not a good product leader or a good operator (see loopt, see his undisciplined capital allocation without FCF, see how he allowed top talent to leave openai); he is, as PG has observed, "extremely good at getting powerful"; and he is extraordinarily ambitious.

the man does deals - he's one of the best dealmakers ever. and he needs more deals and bigger deals now that anthropic and google have both caught up in big ways. it's not a surprise that his hyperambition and openai's survival have led him to compromising decisions.

Frank Greco:

Many colleagues in the AI engineering space are switching from OpenAI to Anthropic. No one trusted OpenAI or Sam Altman before; now it's assumed you shouldn't trust either of them.

Rohit Krishnan:

This is the kind of behaviour that I find most perplexing, and an example of mood affiliation.

Frank Greco:

Mood? Let's be honest: no one trusts any of the foundation model companies on privacy or theft. For OpenAI to get that govt contract, they've had to accede to Hegseth's demands. Clearly, Hegseth has no idea that GenAI is probabilistic and non-deterministic, and that autonomous killing and mass surveillance are huge risks, diplomatically and ethically.