I’m not saying Anthropic is right here, but it’s worth fighting for their right to be wrong. Seems like they’ve received a lot of goodwill from this, but idk how long that’s going to last. I’m an AI Engineer in my day job, and I’ve already received orders from management to switch all of our products using OpenAI models over to Anthropic models.
Agreed
Another great piece exploring the nuance beyond the headlines. Given all the unknowns, it's hard to know what exactly happened. But it does feel like this came down to negotiating personalities more than the actual language in the contract.
the biggest piece of context missing in this article is that this DoD (perhaps it would be more fitting to just refer to them as the DoW) is part of an abnormal administration.
that they reacted with a blowhard, legally unbackable SCR designation demonstrates that the type of administration you're dealing with is not normal.
another piece of context? greg brockman's $25M donation to said administration.
you don't need complete information to read hands, understand incentives, and manage risk.
I agree the admin is jumpy and abnormal, but then Anthropic, alongside Palantir, has been working with them for a year-plus now. I also think the donation is overplayed.
There are a lot of “hints” floating around that make people play 4D chess but in the end I came down on the side I wrote about.
broader context might be that: altman is not a good product leader or a good operator (see loopt, see his undisciplined capital allocation without FCF, see how he allowed top talent to leave openai); he is, as PG has observed, "extremely good at getting powerful"; and extraordinarily ambitious.
the man does deals - he's one of the best dealmakers ever. and he needs more deals and bigger deals since ant and google have both caught up in big ways. it's not a surprise his hyperambition and openai's survival have led him to compromising decisions
Many colleagues in the AI engineering space are switching from OpenAI to Anthropic. No one trusted OpenAI or Sam Altman before; now it's assumed you shouldn't trust either of them.
This is the kind of behaviour I find most perplexing, and an example of mood affiliation.
Mood? Let's be honest, no one trusts any of the foundation model companies on privacy or theft. For OpenAI to get that govt contract, they would have had to accede to Hegseth's demands. Clearly, Hegseth has no idea that GenAI is probabilistic and non-deterministic, and that autonomous killing and mass surveillance are huge risks, diplomatically and ethically.