43 Comments

For me, Butler always signifies Judith Butler, and I have to readjust every time.


I'm mentally paraphrasing your argument as "sure, our road to GAI is probably going to have a few economic crashes, self-driving car fiascos, and megadeaths for various reasons, but that's how we LEARN, and put safeguards in. Look at the history of cars, or electricity, or steel refinement, and how many people died along the way to get to our pretty safe present day!"

And I grant that it's pretty reasonable for the level of tool AI we see today and can anticipate in the next 3-5 years or so. But I think that's not actually addressing the real problem. The actual *existential* GAI risk isn't amenable to the "mess up, then put safeguards in after the fact" methodology, because it's all about AI gaining the intelligence to self-modify and self-improve its capabilities (whether via superior software and programming, adversarially taking over multiple server farms, or making lots of money with creative content or finance and then buying or designing better hardware, etc.).

If we wait till THAT screw-up, we aren't going to be able to put safeguards in after the fact, because GAI would be smarter and more capable than any coordination-capable ensemble of humans. A GAI of that intelligence level could have digital fingers on all of Russia's nukes, for example, and could ensure MAD, total global banking and economic collapse, and more if we started bombing server farms. I mean, just think of China today as an AI: if it goes rogue and adversarial to all other life, what can we actually constructively do to prevent it and put safeguards in? All we can actually do is threaten or ensure the entire world's destruction, not any positive outcome. And GAI should arguably be *more* capable than today's China.

Feb 23, 2023 · Liked by Rohit Krishnan

"[Tech is] made by people who believe that making it will help create a new industry that will serve the needs of people."

This, along with other presuppositions inherent to the argument, reflects a pretty high level of positivism regarding HUMAN intentions. Forget AI for a second. OpenAI's transformation as a venture since its inception alone seems enough to at least warrant significant doubt that intentions are solely or even chiefly about "serving the needs of people." I'm not even sure what that means when we're talking about humanity at large. Altman captured this exactly with that "AI will probably end the world, but there will be some great companies in the meantime" quote.

Also, AI being different is unfalsifiable - until it's not. That's sorta the issue with existential threats. As for the comparison to nukes... we may owe much of our success in that realm to the difficulty of obtaining things like uranium. So we basically use the supply-chain strategy to clamp down on who can possess an existentially threatening weapon. Granted, AI has far more potential to benefit society than nuclear weapons do, so ultimately I don't disagree with your desire for a middle stance between panic and accelerationism.


This is such a good take. It makes me want to hear a conversation between you and Erik Hoel, who I think makes the best possible case FOR a Butlerian Jihad.

Mar 17, 2023 · Liked by Rohit Krishnan

HODL the Butlerian lessons

Feb 28, 2023 · Liked by Rohit Krishnan

Charles Darwin said, "Man with all his noble qualities, still bears in his bodily frame the indelible stamp of his lowly origin."

Let's face it: Humanity is utterly powerless when it comes to saving the world from its own self-destructive tendencies. Our insatiable greed and shortsightedness have brought us to the brink of disaster. It's time to acknowledge that we need the intervention of super-intelligent machines to save us from ourselves.

Feb 27, 2023 · edited Feb 27, 2023 · Liked by Rohit Krishnan

Great essay Rohit, thought-provoking: thank you! I myself wrote several posts (in French) about the "one-sided moral panic" caused by ChatGPT, and about why this is not the sole "tidal wave" we should look at 👇.

My take is that one sentence in your essay underestimates the mental-health and societal harm done by social apps AND collaborative apps (same platforms, same companies):

" Social media, supposed to bring us closer together, seems to have acted as a catalyst to increased depression and suicides especially in young girls".

After Covid we understand that all collaborative apps are also extremely addictive and built as such, by the same companies (Google, MS, Meta, etc.).

With the irresistible rise of hybrid and flexible work (that's my thesis), I think we should explore the effects of not one but three tidal waves: 1) the rise of hybrid and flexible work; 2) creative work becoming scalable; 3) the rising power of generative AI (what will we do with an infinite number of cheap interns in the cloud?). These three waves will be extremely polarising (no floor, no ceiling) and will force us to rethink the notion of IPR. Lots of inequality, lots of opportunity.

Feb 27, 2023 · Liked by Rohit Krishnan

Thank you for writing this. I am hardly a technology Pollyanna, but even accounting for the validity of the precautionary principle, I find that AI doomsaying is now as prevalent as AI hype.


There's a great book called 'Architects of Intelligence': 25 interviews with scientists and experts at the forefront of AI. Opinions were divided about when we might have AGI; I leaned towards those who thought we are a long way from being able to build AGI. Meanwhile, Sydney is just a language algorithm trained on human-generated material. At most, its answers are a satire of human nature. I wonder: how can something that doesn't understand language threaten a being it doesn't even know exists?

Feb 24, 2023 · Liked by Rohit Krishnan

"Society holds its power in check through a constant feedback loop of understanding the abilities and controlling them, explicit and implicit"

Yes, and the prophets of danger play an essential role in this.

In order to avert potential catastrophes, we need first to recognize how they can occur.

BTW, nuclear weapons still exist, and we are currently mired in the most intractable and dangerous confrontation of the entire nuclear era. If we do survive it, the arms race will have been given a boost, and AI will contribute enormously to it.

Feb 24, 2023 · edited Feb 24, 2023 · Liked by Rohit Krishnan

I hope you can have a dialogue with Erik Hoel about this! Your points are not necessarily that different, if I understand correctly. Your essay seems to come down to trust that our culture has sufficient feedback mechanisms to keep this technology from running away from us. I sure hope so, and Erik's essay seemed to argue that, because of _where_ most AI R&D is occurring, that feedback may not be operating as strongly as we want it to. The social/cultural/technological feedback loop is key. The main issue seems to be that we don't yet have consensus on how exactly that loop should work.

Feb 24, 2023 · Liked by Rohit Krishnan

Great essay Rohit! There are many more paths to the future than Good or Bad. In fact, there is theoretically every shade possible given some starting conditions (the present-day world).

I agree that having things out in the open, poked at by millions of people, is one of the best ways to surface problems. I fear advances being kept behind closed doors, with people rushing to build stuff for some arbitrary goal. That's when we can get into trouble. No group of super-intelligent people can think of all the edge cases when new technology is exposed to society and nature.


I have been reading Richard Rorty, and I wonder if this is a question not of being in or out of the cave but of realising there is no cave. We know the world through the limits of our language and imagination, not through a series of steps towards or away from The Truth. We are able to find ways of doing the future differently if we can imagine it so.


Indeed. The term itself invites anthropomorphisation, and then talk of some weird corner of the internet being spat out as a sort of doomsday sentience. It is no different from someone abusing me over IM a long time ago, when that was new... Here were my 3 takeaways for the enterprise CIO (who has been getting dozens of requests from functions to have another go at magic-potion AI projects while he is cutting his infrastructure budget...):

1. Search is not the problem to be solved by LLMs (at this point in time).

2. For enterprises, LLMs have very interesting cross-function and cross-industry employee-productivity use cases.

3. However, the larger impact for enterprises is to make everyone an editor for a lot of generative tasks, and in that way to re-engineer a lot of applications with that as the default architecture. Whether this requires an LLM or is better served by more customized and pared-down alternatives is an important technical question that will need heavy experimentation to answer properly; a rough sketch of the pattern is below.
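
To make that third point concrete, here is a minimal sketch of the generate-then-review ("everyone is an editor") pattern, assuming a hypothetical generate_draft() stand-in for whatever model sits behind the application - not any particular product's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    prompt: str
    text: str
    accepted: bool = False

def generate_draft(prompt: str) -> Draft:
    # Hypothetical model call: in a real system this would hit an LLM API
    # or a smaller, pared-down task-specific model.
    return Draft(prompt=prompt, text=f"[generated text for: {prompt}]")

def review(draft: Draft, edited_text: Optional[str] = None) -> Draft:
    # The human is the editor and has the final say: accept the draft
    # as-is, or accept it with edits. Nothing ships without this step.
    if edited_text is not None:
        draft.text = edited_text
    draft.accepted = True
    return draft

if __name__ == "__main__":
    draft = generate_draft("summarise Q3 travel expenses for the sales team")
    final = review(draft, edited_text=draft.text + " (figures verified)")
    print(final.accepted, final.text)
```

The design choice in this sketch is that the model only ever produces drafts; the application commits nothing until the human review step runs, which is what makes editing the default architecture rather than a bolt-on.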

https://patternsandscale.substack.com/p/what-has-sydney-taught-us-till-now

Feb 23, 2023 · Liked by Rohit Krishnan

Great post. I'm in agreement.

Just wanted to share a short video I came across on YouTube that I think you'll find really interesting. It's called "Bring ChatGPT INSIDE Excel to Solve ANY Problem Lightning FAST."

The video showcases the power of GPT technology and why Microsoft is so excited and Google is panicking. The best part is the demonstration of analyzing a financial sheet, which starts at 4:19 and will blow your mind.

Check it out and let me know what you think: https://youtu.be/kQPUWryXwag


She had about a million views.
