Discussion about this post

Ben

The Claude comment regarding LLM potential (footnote 8) reminds me of the children's "Stone Soup" story... In current parlance, an LLM is a language model based on the transformer architecture. If you change the architecture and training paradigms, and add interactive learning, causal reasoning, etc., then at some point it is no longer a transformer or a language model, no longer stone soup - it's a new architecture and a new kind of model, moving toward a more animal-like cognitive architecture, perhaps.

Can you ride a bicycle to the moon? Yes, if you remove the wheels and add rocket engines, etc!

Kenny Fraser

It's kinda fun to think back. When I was at school I had a near-perfect memory, so I also figured I didn't need to study. And that worked until sometime halfway through my degree course!

12 more comments...