8 Comments
Sep 25, 2023 · edited Sep 25, 2023 · Liked by Rohit Krishnan

I enjoyed reading your history very much, but as someone who does have concerns around AI, I was confused by the phrasing in the last line: "And so we are falling into the trap of doing something". I read the article up to that last line as a bird's-eye view of the history leading to our current moment in tech, but the line itself seemed to suggest it would be a mistake to try to "do" something about the potential risks of AI. I'm definitely not an AI doomer, but I don't clearly see the argument against learning from our past mistakes with social media and applying those lessons to AI. I think someone like Sam Altman may even have taken heed of the dangers of the unchecked "move fast and break things" philosophy. Enjoyed the piece.

author

Thanks for the kind words! Re the application to AI, the point is that we're rushing to regulate it without knowing what, how, or why we're regulating, as if the process itself will give us the answers.

Thanks for the response! Hm, interesting. I do think the why (e.g. gen AI causing an influx of false information) and the what (e.g. ChatGPT, deepfakes, etc.) often seem to be defined, but the how is a bit nebulous, at least to public knowledge. I wonder if regulation in this space just has the nature of making more ill-defined trade-offs than something classic like the seat belt or runoff waste, simply because it's much more complex and multi-disciplinary. But I don't know that this should dissuade action; perhaps we have to accept that it will by nature be uncertain, and that trying to track it and do something may be better than letting it run completely unchecked. If some unknown form of energy spawned out of a lab, for example, I think most people's priors would be to regulate it so we can understand it better, even if we don't understand everything about it.

Fascinating debate - I'll just add a tiny thought: the act of making something regulatable, a public concern rather than a curio concern, is an action in its own right. The "how" doesn't always matter yet. I mean, the USA was built on the strand of philosophy that humans everywhere are entitled to pursue happiness -- legislate that! :) ha! But having, in principle, the notion that AI is a public concern with great potential global societal impact, and that therefore the companies that trade in it are not free to do whatever they want, is an important something.

author

We can't regulate something at that high a remove. It's not just that the tradeoffs are ill-defined; they're not yet known. Even with deepfakes, which we know are an issue, what can you actually do?

- Control generation: we can't, as we'd have to build a global panopticon at a minimum.

- Dissuade dissemination: we already do, but if people share wrong info we can't stop them.

- Criminalise sharing: that's too draconian, and unenforceable.

- Set liabilities: that would chill tech development as a whole, including detection, with far worse outcomes.

- Create detection methods: that's a technological question, not a regulatory one.

There's no regulatory silver bullet for these questions.

I do still think there are regulatory decisions that can be enforced and that would create somewhat measurable trade-offs or improvements. One small example from the EU AI Act is the requirement that content made using gen AI be disclosed as such. Yes, this would not be perfectly effective, but that's a problem with a lot of regulation, and to my mind it's not a good argument against enforcing it now as much as we can while we work on better methods of implementation. Even a tenet like the ban on the development of facial recognition software passes the test of being tractable in terms of a what, a how, and a why. There are definite trade-offs being made there, but at least the trade-offs are defined. It is a nebulous landscape, for sure. But my priors do lie with the idea that the tech industry, which I don't see as fundamentally different from something like the oil industry with regard to externalities, doesn't really have the incentive to regulate itself in accordance with the overall social good.

Sep 25, 2023 · Liked by Rohit Krishnan

Very valuable history. Thanks, Rohit. As Peter Gabriel sang, "Lord, here comes the flood. We'll say goodbye to flesh and blood." One aspect of social media's effect on the economics of music can be seen either as a straitjacket or as an opportunity. Artists are seen not to exist if they don't have a strong social media presence. This does compel us to find creative ways to be noticed, and, increasingly after the disastrous effect of the pandemic, it has made live performances more vital and important. I'm not talking about the Swift/Beyoncé/Sheeran mega-machines in particular but about venues all the way down to 50-capacity rooms. It seems that people want to see and hear something real!
