In the many discussions about AGI and what it bodes for our future, most of the conversation revolves around the technology itself and the worries we ought to have about existential risk.
Even amongst those who worry, and despite the incredible amount of publicity, attempts at creating better ideas on what we ought to do haven't gone beyond the obviously absurd, like creating a draconian world government to monitor chip use and bomb GPU clusters. There is very little out there in the way of viable, practical suggestions. The best recent example we have is a half-hearted letter calling for a six-month pause, which is a negative signal given who signed it, impossible to enforce, and pointless even if it were enforced.
For background, I am in favour of *not* worrying about us going extinct from a superintelligence gone awry. I absolutely think we're likely to create incredible machines in the next decade that will match or exceed humans in multiple areas. So, what can we do about it?
There's a large amount of wishful thinking in all these proposals, so I thought it would be helpful to level-set.
Robin Hanson recently talked about the need for liability laws as a way to ensure the development of this technology is done safely. The assumption is that any harms caused by AI development are to be laid at the feet of the developers, like OpenAI.
Tyler Cowen also recently spoke of the difficulty of directly attributing harmful outcomes of AI to the developers. Which is also fair: if I made a billion dollars using LLMs, I wouldn't be expected to hand part of it to OpenAI, so why should the losses flow back to them? If the technology provider can't control how the technology is used, which in most cases they can't, then general liability is hard to place at their feet.
books and maps have provided dangerous information to many criminals and terrorists. But liability for these kinds of crimes is generally not placed on the publisher. It is impractical to demand that all published information be the right combination of true and harmless. And what is the output of an LLM but a new and more powerful kind of book or map? (Or how about a more mischievous question: What if the LLM query requested that the answer be printed in the form of a book?)
Tyler looks at this from the point of view of "easily avoidable harm reduction": if the provider could have easily prevented the tool from being used for evil, then they end up being liable.
But all of this feels like the beginning of an actual conversation about how policy ought to work. If we are to design our response to the world, it helps to start with what the world actually looks like.
I. The Playing Field
1 The foundational model
OpenAI arguably led the race by using Google's published papers and top researchers to create the major breakthrough that kickstarted all this. The current field of players able to invest and spend billions of dollars is but a handful, mostly working with or alongside the large tech companies.
Barring major changes, we can imagine that there are plenty more billions where this came from, especially considering that the major players are all seeing enormous interest, internal and external, in productising this technology.
2 Open source
Especially considering the release of Llama into the wild, the rest of the camelids, the rise of LoRA methods, StableLM, RedPajama as a methodology, and much more, not to mention the earlier giants of EleutherAI et al, the floodgates are well and truly open. When I can run a small model on my phone today, we're no longer in an arena where we can simply 'turn it off'.
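To give a sense of how low the barrier already is, here is a minimal sketch of running a small open-weights model entirely on local hardware. The llama-cpp-python bindings and the quantised model file name are illustrative assumptions, not a recommendation of any particular stack:

```python
# Minimal sketch: a small open-weights model running locally, no cloud required.
# Assumes the llama-cpp-python bindings are installed and a quantised GGUF
# model file has been downloaded; both are illustrative choices.
from llama_cpp import Llama

llm = Llama(model_path="./small-model-q4.gguf", n_ctx=2048)

out = llm(
    "Summarise the arguments for and against pausing large AI training runs.",
    max_tokens=200,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```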
3 The startups
Considering the sheer scale of activity on the foundational model side, it only makes sense that there are hundreds or thousands of people excited about building the next generation of software on top of it.
That’s why we see things like AutoGPT and BabyAGI as agents, and thousands of projects to do everything from making recipes to completing stories to doing better autocomplete for code!
4 The hardware
Nvidia is the arguable champion regardless of who wins, having ridden the gaming wave first, then the crypto wave, and now the AI wave, with GPUs consistently the thing in demand. But the other major tech giants are also entering the water, whether it's Apple with their custom silicon or Google trying to ensure their TPUs aren't left far behind.
II. Impact
Considering the status quo, we may soon have an incredibly powerful tool in our grasp. Will it be like software, or the cloud, or a nuclear arsenal? There are plenty of analogies floating around, and we don't quite know what's likely to happen.
What we do know is that a large chunk of the world's intellectual work, or at least the intellectual drudgery that inevitably accompanies it, can now be automated. Or at least distinctly reduced.
So let’s assume this continues and we’re going to develop AGI, or something equivalent, which in this case means a very powerful piece of software that can translate our wishes to actions in the real world, with the usual limits that human civilisation, tech debt and physics place in front of us.
However there is, and will continue to be, real competitive tension over who will create the first major AGI system. And if one company or organisation does create it first, what's their lead likely to be? Weeks, months or years?
There are already efforts to build specific foundational models, at the company level or the country level. Whether this has a material impact beyond a fine-tuned version of a generic open source model is unclear.
Once powerful enough AGI systems exist in the world, they will impact the biggest decisions we make in our lives - medical decisions, financial decisions, education, love!
Here's one story. If (when?) AI truly does take over the means of production, making multiple jobs obsolete and substantially improving the returns to capital, this is likely to affect labour relations. It will likely lead to UBI, a better social safety net, or a larger Alaska-style "equity share" for citizens.
And when the use of AI does cause major problems, we'd have to think about policing it. We might start by actively using adversarial LLMs (or their successors) to police each other, and perhaps by putting them inside walled gardens.
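As a rough illustration of what "LLMs policing each other" could mean in practice, here is a minimal sketch in which one model drafts an answer and a second model reviews it against a policy before anything is released. The `generate` function, the model names, and the policy text are all placeholders for whatever provider and rules you actually use:

```python
# Sketch of adversarial policing: a worker model drafts an answer and a
# reviewer model checks the draft against a policy before release.
# `generate`, the model names, and the policy text are illustrative placeholders.

def generate(model: str, prompt: str) -> str:
    """Call your LLM provider of choice here (placeholder)."""
    raise NotImplementedError

POLICY = "Refuse requests that facilitate violence, fraud, or privacy violations."

def policed_answer(user_request: str) -> str:
    draft = generate("worker-model", user_request)
    verdict = generate(
        "reviewer-model",
        f"Policy: {POLICY}\n"
        f"Request: {user_request}\n"
        f"Draft answer: {draft}\n"
        "Does the draft violate the policy? Reply ALLOW or BLOCK, with a reason.",
    )
    if verdict.strip().upper().startswith("BLOCK"):
        return "Request declined by the reviewing model."
    return draft
```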
And if AGI is sufficiently powerful and people create agents for it, it's likely to require an identity of its own. Once it can act on its own behalf or of its own volition (legally, I mean), the structure of society changes and we have citizens of multiple classes, even if one class isn't technically made up of citizens.
But that's only one way this can go, and the questions in front of us are legion.
III. Questions
All this is predicated on the idea that intelligence, or cognition, is something that will get democratised.
Which means that if we truly did have technology this powerful, what should we be thinking about? The assumption is that we will build incredibly powerful computer systems, and that they will need to be treated as both powerful and dangerous in terms of their impact on the world.
So, rather than worrying about the percentage probability of the world imploding, here are some questions I think are incredibly important that I’d like some answers to.
I. Economic and Legal Issues
A. Taxes and Liability
How should AGI-generated income be taxed? Should it get redistributed to those who contributed to the training, like with Stability, or shared amongst everyone, like an AGI UBI?
Who should be held liable for AGI actions and decisions? Is it a last mile problem, with the user to blame, or a first mile problem, with the manufacturer, or somewhere in the middle depending on the task (court sentencing vs self driving)?
B. Copyright and Intellectual Property
How will AGI creations be protected under current copyright law? Does AGI have personhood? Can an LLM create a new product and own the copyright? Can an AGI be considered a legal author or inventor?
How will the lifetime of intellectual property change when we can have software written once and thrown away? Do we need to incentivise human code writers or artists so that we get enough training data in the future?
C. Markets
How can monopolistic behaviour be addressed in an AGI-driven economy? Is it winner-take-all based on who makes AGI first? That seems implausible to me, since inference is so much cheaper than training; multiple winners seem likely, with plenty of open source to fill in the blanks.
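A rough back-of-envelope illustrates why inference is so much cheaper than training, using the common approximations of about 6·N·D floating-point operations to train a model with N parameters on D tokens, and about 2·N operations per generated token at inference. The parameter and token counts below are purely illustrative:

```python
# Back-of-envelope: training vs inference compute for a large model.
# Common approximations: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs/token.
# N and D are illustrative, not any particular model's figures.
N = 70e9          # parameters
D = 1.4e12        # training tokens

training_flops = 6 * N * D          # one full training run
flops_per_token = 2 * N             # generating a single token

# How many tokens of inference cost as much compute as the training run?
breakeven_tokens = training_flops / flops_per_token   # simplifies to 3*D
print(f"Training run:        {training_flops:.2e} FLOPs")
print(f"Per generated token: {flops_per_token:.2e} FLOPs")
print(f"Break-even tokens:   {breakeven_tokens:.2e}")
```

On these illustrative numbers, one training run costs as much compute as generating roughly three times the training set's worth of tokens, which is why serving a model is a comparatively small marginal expense once it exists.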
What measures should be taken to ensure fair competition? Does this need government intervention?
II. Social Considerations
A. Privacy and Data Security
How can AGI be designed to respect individual privacy rights? Is this even possible, even if desirable?
What security measures should be in place to protect sensitive data? How much sensitive data can you glean by interpolation if you know "related" data? What's the entropy of sensitive data?
B. Education and Workforce
How can education systems adapt to an AGI-driven society? Should everyone have a personal tutor, and will this widen the Gini coefficient of classroom learning? Will there be classrooms?
What strategies can be implemented to address potential workforce displacement? Is it a "truck drivers learning to code" moment, or a "coders learning to drive trucks" moment?
III. International Relations
A. Foreign Policy and Military Affairs
How will AGI impact the balance of power between nations? Will it be the same as today, except the poorer nations lag even further behind in capabilities? Or, as with cellphones and the internet, will the gap inevitably be leapfrogged?
What role will AGI play in intelligence gathering and military operations? Whether it’s surveillance or prediction of behaviour or autonomous agents carrying out operations, what’s the orchestration strategy?
B. International Treaties and Trade
How should international agreements address AGI-related concerns? Especially economic, but also in terms of using AGI to negotiate trade treaties (or Brexit)? Could it have made the Brexit negotiations easier, or at least faster, if both sides had an AGI?
IV. Politics
How will AGI influence political party platforms and strategies? Will it cause a "flattening" of party platforms as many more facts turn out to be settled because "the computer says so"? Would we no longer argue about optimal tax policy, and instead overindex on social policy because that's where the genuine philosophical differences lie?
What role will AGI play in coalition government negotiations and decision-making? How many decisions turn out to be delegable once you have a machine capable of crunching the numbers and figuring out an optimal answer to hard economic problems (eg what the optimal land value tax rate should be in central London)?
IV. Policy
There are plenty of metaphors for the current thing. While we debate them, we're stuck in a loop of constant streaming predictions about a future we barely understand.
And if intelligence exists "on tap", shouldn't it be used, nay consulted, in the creation of policies relating to its own use? If not, what would you want to see before saying yes or no? It's easy to say something broad like "explainability", but when you break it down it quickly becomes murky. Our belief in its explanations tells us as much about us as it does about it.
The question in front of us is what to do given this uncertainty. It's quite likely that this wave, or perhaps the next, will eventually produce tools that make intelligence as cheap as we've made computation.
And when that happens, we are going to have to grapple with the fact that society as it stands today will be overturned in multiple ways, some of which we can foresee and some we can't. For the areas where we expect change, though, it makes sense to start thinking today.
We have precedent. As personal computers became mainstream and then the internet became ubiquitous, arguably changing the way we work and communicate, there have been a few direct and many more indirect ways they have impacted life.
We got legislation like the Online Safety Bill or the Digital Competition Act, both of which try to retool previously existing laws for the online domain
Similarly we’ve had net neutrality questions, access acts, cybersecurity laws, all of which aim to bring previously important questions of free speech, equity in access and security considerations to the 21st century
We have GDPR, which mainly tries to make personal data work the same as your personal belongings, with mixed success
We have Section 230 and its ilk, which try to let platforms provide a venue for user generated content without taking liability for that content, but where it's impossibly hard to prove who the "least cost" avoider is or to apply Coase
Guidelines like FISMA in the US or DTA in Australia to explore how to best use these tools to deliver public services better
Considering how much these have changed the landscape of our society over the past half century, it's worth noting that the changes have been incremental, mainly retooling existing beliefs and philosophies into their modern counterparts rather than inventing new ones.
It is also worth asking whether there has ever been a regulation or policy created early enough to have a positive, proactive impact on an industry. The closest examples are perhaps our stance on human cloning and GDPR, the latter of which gave individuals more rights at the cost of making data processing and management cumbersome for all enterprises.
The biggest changes have been social. Our interactions have moved online. We no longer write longhand memos or have them typed out by a secretarial pool. We routinely perform, on our handheld devices, computations more powerful than those that landed Neil Armstrong on the moon. We specialise even further in our jobs and maintain incredibly complex global interconnections to run our economies: physical in terms of goods, intellectual in terms of labour, and knowledge-based in terms of scientific research and publishing.
In society, however, we saw the emergence of new giants, like India with outsourcing; new talent clusters, from Nigeria to Singapore; and new industrial giants that are the largest corporate megafauna ever to grace the world. None of this was easily foreseeable or within our cone of foresight, yet it was integrated into our overarching worldview.
It will be thus with AGI too. To think about the implications of this technology is to think about our society as it's structured, and the philosophies we espouse collectively by deciding to live this way. More than the personal computer or the internet ever did, this has the potential to fundamentally alter society. That's both the promise and the opportunity.
Addendum
Regarding AGI, I've been soliciting historical examples of what a workable, structured, proactive policy would need to abide by, and so far there are three core takeaways:
Almost all proactive policies enacted on technologies we don't understand have been bans (eg GMOs). This only works in cases where the downside of the technology outweighs any possible upside. There have been extremely impressive treaties, like nuclear non-proliferation and the Outer Space Treaty, which worked well because we knew what end state we were aiming for.
The places where proactive policies work best are subsidies and investments: when governments specifically steer investment into promising areas without overfocusing on one particular outcome.
There exist frameworks like GDPR that create a method to reconcile existing commercial practices with consumer protection, and yet even these are hard to implement in practice.
The lesson from these is for policies to become more nimble in response and more forward-thinking in investment, and definitely to avoid knee-jerk bans, which are both stifling and counterproductive.
Rohit, thanks for this comprehensive post. Many of these questions have been going around in my mind as well. I’ve been working with a “team of GPT experts” to analyze such questions. I find that when I try to figure them out by myself I’m overwhelmed. Here is their first response after selecting the priority question. I hope you find the analysis useful.
Mike Randolph
Helix (2023-04-22): After further discussion among the teams, a consensus has been reached on the first step in addressing the question of liability related to AGI actions and decisions. The teams agree that the initial step should be to establish a clear and adaptable legal and ethical framework that takes into account the varying degrees of control, autonomy, and influence in AGI systems.
This framework should include:
Defining the roles and responsibilities of users, manufacturers, and other stakeholders involved in the development, deployment, and usage of AGI systems.
Creating guidelines for determining liability in different contexts and use cases, ensuring that the framework is flexible enough to address a wide range of scenarios and applications.
Establishing a system for monitoring and evaluating AGI systems to ensure compliance with the established legal and ethical guidelines.
Encouraging collaboration between stakeholders, including governments, industry, academia, and civil society, to develop and maintain a comprehensive understanding of the potential risks and benefits associated with AGI systems.
By developing a well-defined framework that balances the responsibilities of different stakeholders, society can navigate the challenges posed by AGI systems and ensure that the benefits of this technology are realized while minimizing potential harm.
Mike: How would the teams suggest going about executing this step?
Helix (2023-04-22): The teams propose the following approach for executing the first step in addressing liability related to AGI actions and decisions:
Establish a multidisciplinary task force: Form a group of experts from various fields, including law, ethics, computer science, AI, engineering, social sciences, and industry representatives. This task force will be responsible for developing the legal and ethical framework and promoting collaboration among stakeholders.
Conduct research and gather input: The task force should conduct thorough research on existing legal and ethical guidelines related to AI and AGI systems, as well as solicit input from stakeholders through public consultations, workshops, and conferences. This process will ensure that the framework is comprehensive and considers a wide range of perspectives.
Draft the framework: Based on the research and input gathered, the task force should draft the legal and ethical framework, addressing the roles and responsibilities of different stakeholders, liability guidelines for various use cases, and a system for monitoring and evaluating AGI systems.
Seek feedback and iterate: Share the draft framework with stakeholders and the public for feedback, ensuring that the document reflects the needs and concerns of all parties involved. Revise the framework based on the feedback received, and repeat this process as needed to create a robust and adaptable document.
Adoption and implementation: Once the framework has been finalized, work with governments, industry, and other stakeholders to adopt and implement the guidelines. This may involve creating new laws or regulations, adapting existing ones, or establishing industry best practices.
Ongoing monitoring and updates: Regularly review and update the framework to ensure that it remains relevant and effective as AGI systems evolve and new applications emerge. Establish a system for reporting issues and concerns related to AGI liability and use this information to inform future updates to the framework.
By following this approach, the teams believe that a comprehensive and adaptable legal and ethical framework can be developed to address liability related to AGI actions and decisions, ensuring that the technology is used responsibly and its benefits are maximized while minimizing potential harm.
It's not completely on topic, but I, and I would guess many others, would be interested to see what a detailed rebuttal to your steelmanned AI risk argument would look like:
"The AI xrisk steelman is
- machines continue to get smarter
- their values diverge from humanity, we can't trust them
- they're highly likely to wipe us out - due to power seeking and optimisation
- therefore in most future worlds we die"
(Incidentally, I find this a great summary of the core concerns - much better than most prominent AI worriers! I'm not a great fan of Yudkowsky myself except insofar as I am happy to see more people become conscious about AI risk and he helps with that)
You said on twitter that points 1-3 aren't proven. As people there point out, this seems at least strange to say for point 1 without further argument (this would e.g. require all forms of Moore's law breaking down despite strong optimization pressure).
Regarding 2, it is at least empirically true that we can in fact not control AI systems (at the moment?) - we can only "align" them via trial and error, and this would quite evidently not be an option for truly capable systems which might be context aware.
As for 3, I think the worry is not necessarily that we die, but more relevantly that we become an afterthought in an AI-controlled world, and then likely die or are at least irreversibly disempowered.