5 Comments
Jul 28, 2021 · Liked by Rohit Krishnan

Well, you know, there's a large cognitive science literature dating back to the 1970s and 1980s that models knowledge as an interrelated network of ideas. I've even contributed to that literature.

author

Please do send me some of yours? I'd love to read them. This exercise started as a way to structure my philosophical thoughts, and meandered. I'll do a v2 of these soon.

Jul 28, 2021 · Liked by Rohit Krishnan

Here are two papers to give you a feel:

This is a set of informal notes, "From Associative Nets to the Fluid Mind," https://www.academia.edu/9508938/From_Associative_Nets_to_the_Fluid_Mind

Here's a more formal paper that I did for the Center for Manufacturing Productivity at RPI, "Ontology in Knowledge Representation for CIM," https://www.academia.edu/19804747/Ontology_in_Knowledge_Representation_for_CIM

The second is firmly within what is known as "classical" or "symbolic" AI, which fell out of favor in the 1980s. Since I was never committed to it as a rock-bottom theory, that didn't particularly bother me. A lot of the current crop of machine learning/neural network types disdain symbolic computing. I think they're wrong. The trick is to figure out how to implement symbolic computing within a neural network framework. I've got a blog post on that, "Geoffrey Hinton says deep learning will do everything. I'm not sure what he means, but I offer some pointers. Version 2," https://new-savanna.blogspot.com/2021/05/geoffrey-hinton-says-deep-learning-will_31.html

My major project for the summer is to write up an intelligible set of notes on how that might be done.

author

I'd love to read what you write on this topic - I've bookmarked these to read this weekend! My personal hypothesis is that deep learning is pretty great, but we'll have to bring symbolic AI back and incorporate it to improve few-shot and zero-shot learning. It's an exciting space.

Jul 28, 2021 · Liked by Rohit Krishnan

David Ferrucci of Elemental Cognition (formerly head of IBM's Watson project) agrees with you on that. FWIW, I worked with him a bit when he was a graduate student at RPI, but not enough that I would claim him as my student. You might want to read what I have to say about GPT-3: "GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2," https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_2

I think GPT-3 represents a significant advance, and we need to try to understand what it is doing; I offer some preliminary thoughts in that paper. But I think such techniques will bottom out sooner or later.
