10 Comments
Apr 22, 2023 · Liked by Rohit Krishnan

Rohit, thanks for this comprehensive post. Many of these questions have been going around in my mind as well. I’ve been working with a “team of GPT experts” to analyze such questions; when I try to figure them out by myself, I find myself overwhelmed. Here is their first response after selecting the priority question. I hope you find the analysis useful.

Mike Randolph

Helix (2023-04-22): After further discussion among the teams, a consensus has been reached on the first step in addressing the question of liability related to AGI actions and decisions. The teams agree that the initial step should be to establish a clear and adaptable legal and ethical framework that takes into account the varying degrees of control, autonomy, and influence in AGI systems.

This framework should include:

Defining the roles and responsibilities of users, manufacturers, and other stakeholders involved in the development, deployment, and usage of AGI systems.

Creating guidelines for determining liability in different contexts and use cases, ensuring that the framework is flexible enough to address a wide range of scenarios and applications.

Establishing a system for monitoring and evaluating AGI systems to ensure compliance with the established legal and ethical guidelines.

Encouraging collaboration between stakeholders, including governments, industry, academia, and civil society, to develop and maintain a comprehensive understanding of the potential risks and benefits associated with AGI systems.

By developing a well-defined framework that balances the responsibilities of different stakeholders, society can navigate the challenges posed by AGI systems and ensure that the benefits of this technology are realized while minimizing potential harm.

Mike: How would the teams suggest going about executing this step?

Helix (2023-04-22): The teams propose the following approach for executing the first step in addressing liability related to AGI actions and decisions:

Establish a multidisciplinary task force: Form a group of experts from various fields, including law, ethics, computer science, AI, engineering, social sciences, and industry representatives. This task force will be responsible for developing the legal and ethical framework and promoting collaboration among stakeholders.

Conduct research and gather input: The task force should conduct thorough research on existing legal and ethical guidelines related to AI and AGI systems, as well as solicit input from stakeholders through public consultations, workshops, and conferences. This process will ensure that the framework is comprehensive and considers a wide range of perspectives.

Draft the framework: Based on the research and input gathered, the task force should draft the legal and ethical framework, addressing the roles and responsibilities of different stakeholders, liability guidelines for various use cases, and a system for monitoring and evaluating AGI systems.

Seek feedback and iterate: Share the draft framework with stakeholders and the public for feedback, ensuring that the document reflects the needs and concerns of all parties involved. Revise the framework based on the feedback received, and repeat this process as needed to create a robust and adaptable document.

Adoption and implementation: Once the framework has been finalized, work with governments, industry, and other stakeholders to adopt and implement the guidelines. This may involve creating new laws or regulations, adapting existing ones, or establishing industry best practices.

Ongoing monitoring and updates: Regularly review and update the framework to ensure that it remains relevant and effective as AGI systems evolve and new applications emerge. Establish a system for reporting issues and concerns related to AGI liability and use this information to inform future updates to the framework.

By following this approach, the teams believe that a comprehensive and adaptable legal and ethical framework can be developed to address liability related to AGI actions and decisions, ensuring that the technology is used responsibly and its benefits are maximized while minimizing potential harm.

Apr 24, 2023 · Liked by Rohit Krishnan

It's not completely on topic, but I, and I would guess many others, would be interested to see what a detailed rebuttal to your steelmanned AI risk argument would look like:

"The AI xrisk steelman is

- machines continue to get smarter

- their values diverge from humanity, we can't trust them

- they're highly likely to wipe us out - due to power seeking and optimisation

- therefore in most future worlds we die"

(Incidentally, I find this a great summary of the core concerns - much better than most prominent AI worriers! I'm not a great fan of Yudkowsky myself, except insofar as I am happy to see more people become conscious of AI risk and he helps with that.)

You said on Twitter that points 1-3 aren't proven. As people there point out, this seems at least strange to say for point 1 without further argument (it would, for example, require all forms of Moore's law breaking down despite strong optimization pressure).

Regarding 2, it is at least empirically true that we cannot in fact control AI systems (at the moment?) - we can only "align" them via trial and error, and this would quite evidently not be an option for truly capable systems which might be context-aware.

As for 3, I think the worry is not necessarily that we die, but, more relevantly, that we become an afterthought in an AI-controlled world, and then likely die or are at least irreversibly disempowered.


Excellent piece. I appreciate the thought-provoking and important questions.
