Discussion about this post

Ben

The Claude comment regarding LLM potential (footnote 8) reminds me of the children's "Stone Soup" story... In current parlance, an LLM is a language model based on the transformer architecture. If you change the architecture and training paradigms, and add interactive learning, causal reasoning, etc., then at some point it is no longer a transformer or a language model, no longer stone soup - it's a new architecture and a new kind of model, moving toward a more animal-like cognitive architecture, perhaps.

Can you ride a bicycle to the moon? Yes, if you remove the wheels and add rocket engines, etc!

Nicholas R Karp

It is unreasonable to expect a single paradigm to replicate every aspect of human intelligence. Some problems are purely perceptual, some associative, some procedural; and I'd expect there are many more categories we have yet to define with clarity. Like calculators, rules engines, and convolutional nets, LLMs add another significant tool that can do a certain, limited subset of what people can do. I expect progress toward AGI to emerge as a series of exciting breakthroughs, each leading to a plateau until the next breakthrough. The real question is how much AI systems can shorten the time to the next breakthrough.

13 more comments...
