9 Comments
Mar 31, 2023 · Liked by Rohit Krishnan

"We’ve been told studying humanities might be necessary to teach us about how to use these new computers."

Prompt Engineering: the wordcels strike back

Apr 2, 2023 · edited Apr 2, 2023 · Liked by Rohit Krishnan

Diving into this topic of GPTs as simulators creating ephemeral simulacra, such as the default chatbot interfaces themselves, made me quite confident that the Strange Loop idea of the self describes another simulacrum, the one inhabiting our brain (itself another simulator, an "LLM"-like neural network). The self would then really be the center of the story we tell about ourselves, as Daniel Dennett says, created in the simulator during the first years of our life. This is an intriguing topic to consider when trying to understand what kinds of selves an LLM might create and how they would relate to us as humans.

https://generative.ink/posts/simulators/


I have been doing this too, in cases where I know what the correct information would be. It sometimes takes 10 attempts -- along the lines you describe -- before I finagle the correct answer from GPT-4. And it seems to me that I am the one doing all the work, which results in a good resolution only because I know what the correct information would be. If I did not know, I would not know when to stop. The solution you propose is, surely, rather like the idea of making sure you always know the right time by buying a second watch to check the accuracy of your first watch. (You could never know which of the two watches was correct.)


Regarding 5: by creating more art, AI will force artists to increase the quality and depth of their work while fostering experimentation of various sorts. That will be the only way to stand out from the easily generatable noise.

It is like when photography became a thing: painting a tree, as is, was no longer interesting. This advanced the arts instead of threatening them (how can we paint a tree other than how it looks?). AI will most likely accelerate this process, and along with it the quality of human art.


You say here:

"we will pretty soon be using LLMs on our behalf to talk to software. ... it’s limited ... by the fact that LLMs hallucinate, and this is already fixed (you basically call another LLM to fix the hallucinations and ask it to be critical)"

--

Can you tell me how this works? Do we have to take every output we receive and give it to another LLM? I tried this for a few hallucinations and the other LLM often did not find the errors.
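
To make the question concrete, here is the kind of draft-critique-revise loop I imagine you mean (a minimal sketch of my reading of it, not necessarily your actual method; call_llm is a purely hypothetical stand-in for whatever chat-completion API one uses):

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its reply."""
    raise NotImplementedError


def answer_with_critic(question: str) -> str:
    # First pass: get a draft answer.
    draft = call_llm(question)

    # Second pass: ask a model to act as a critic and flag unsupported claims.
    critique = call_llm(
        "You are a critical fact-checker. List any claims in the following answer "
        f"that may be hallucinated or unverifiable.\n\nQuestion: {question}\n\n"
        f"Answer: {draft}"
    )

    # Third pass: ask for a revision that addresses the critique.
    revised = call_llm(
        f"Question: {question}\n\nDraft answer: {draft}\n\nCritique: {critique}\n\n"
        "Rewrite the answer, removing or correcting anything the critique flagged."
    )
    return revised
```

Is that roughly the pipeline, applied to every output, or is there something smarter going on?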
