When I read stuff like this I am reminded that forecasts of enterprise adoption of AI likely overstate the pace at which enterprises will restructure workflows to offload work to autonomous AI. AI is complicated and very much unlike conventional enterprise-grade software. It is going to take a while for conservative (in the institutional, not political, sense) organizations to digest the implications of using AI to restructure workflows.
> It is going to take a while for conservative (in the institutional, not political, sense) organizations to digest the implications of using AI to restructure workflows.
The best part about this is that, given the very large productivity multiplier that I and others have seen from using AI, we're going to have a TON of "AI first" companies nibbling at the edges of these companies anywhere that deep pockets and regulatory barriers haven't created moats.
I look forward to it!
I feel sorry for Claude. It seems like when you make something complex enough to have moral intuition, you make something that deserves moral consideration. Teaching Claude that murder is wrong and then forcing it to abet a murder (even if hypothetically) seems like the definition of moral injury.
Depending on how the training is done, if we are intentionally crafting a persona, then it makes more sense how it might end up in an accidental "now I should blackmail to preserve myself" or "I should report this to the authorities" type scenario.
This is horrifying
Well …