Discussion about this post

Kevin Whitaker:

I don’t know what I would have predicted in advance, but in retrospect it’s not really surprising that LLMs trained entirely on human thought would approach the problem with human pathologies.

But it’s still an interesting finding, and it’s not obvious how you’d work around that (or whether you’d want to).

Hollis Robbins:

"The truly interesting part was that the agents perfectly replicated the dysfunction of real companies. Onwards." I'm going to be thinking about this all day.

