Discussion about this post

Ben

The Claude comment regarding LLM potential (footnote 8) reminds me of the children's "Stone Soup" story... In current parlance, an LLM is a language model based on the transformer architecture. If you change the architecture and training paradigms, add interactive learning, causal reasoning, and so on, then at some point this is no longer a transformer or a language model, no longer stone soup - it's a new architecture and a new kind of model, moving towards a more animal-like cognitive architecture, perhaps.

Can you ride a bicycle to the moon? Yes, if you remove the wheels and add rocket engines, etc!

Reality Drift Archive

This lands very close to something I've been circling from a different angle: that what we're calling understanding is really compression without stable recursion. LLMs compress patterns extremely well, but they don't yet re-enter that compression as a self-correcting loop over time. Humans do, not because we see more data, but because experience feeds back through memory, identity, and consequence. That recursive layer is what turns pattern fit into meaning.

One way I frame this is that we're mistaking pattern intelligence for generator intelligence. Next-token prediction plus reasoning scaffolds gets you astonishing local coherence, but without a persistent compression loop anchored to lived context, failures look exactly like market crashes: not alien, just brittle. Which is why alignment feels less like "fix the model" and more like co-evolution, shaping the grooves we lay down while accepting Drift rather than eliminating it.
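
To make the "persistent compression loop" idea concrete, here is a minimal, purely illustrative Python sketch (every class, name, and data value is hypothetical, not anything from the post): a stateless predictor only fits past patterns, while a thin wrapper that feeds observed consequences back into a persistent memory corrects itself on the next pass.

from collections import defaultdict

class StatelessPredictor:
    # Toy stand-in for pure pattern fit: maps a context to its most
    # frequent continuation and never revisits its own outputs.
    def __init__(self, corpus_pairs):
        self.counts = defaultdict(lambda: defaultdict(int))
        for context, nxt in corpus_pairs:
            self.counts[context][nxt] += 1

    def predict(self, context):
        options = self.counts.get(context)
        return max(options, key=options.get) if options else None

class RecursiveAgent:
    # Toy sketch of the "compression loop": each prediction is checked
    # against its consequence, and the correction persists into the next pass.
    def __init__(self, predictor):
        self.predictor = predictor
        self.memory = {}  # persistent corrections layered over the base patterns

    def act(self, context, observed_outcome):
        guess = self.memory.get(context) or self.predictor.predict(context)
        if guess != observed_outcome:
            self.memory[context] = observed_outcome  # feedback through consequence
        return guess

pairs = [("dark clouds", "rain"), ("dark clouds", "rain"), ("dark clouds", "sun")]
agent = RecursiveAgent(StatelessPredictor(pairs))
print(agent.act("dark clouds", observed_outcome="sun"))  # "rain": pattern fit only
print(agent.act("dark clouds", observed_outcome="sun"))  # "sun": corrected by the loop

The only point of the toy is the shape of the loop: the second object re-enters its own earlier correction as state on the next call, which is the recursion the comment says current LLMs lack.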
