Discussion about this post

Ted Wade

A couple of important edge cases here. One is reacting swiftly to extremely high-cost situations, like an apparent nuke launch or an apparent AI foom. Quick reaction would be demanded, yet not very helpful for producing the best outcome. Predictive knowledge has far, far more value, in that it supports avoiding the need for reaction at all. And that is what sensible people are advocating.

The second case is that signal detection dominates in some reaction situations. Warming is an example where clear signals abound: the oceans have clearly heated up, methane is rising, species and populations are disappearing, fires and floods are mounting, temperature records are being broken. But we are doing nothing. The quick-reaction/fast-money faction is ignoring the signal and using every defense mechanism in the book to prevent anyone (not just themselves) from meaningfully responding to it.

R.B. Griggs

Yes! Lots of resonance with Kevin Kelly's Pro-Actionary Principle (via Max More): https://kk.org/thetechnium/the-pro-actiona/

Every viable proposal for AI alignment will need "better tools for anticipation, better tools for ceaseless monitoring and testing, better tools for determining and ranking risks, better tools for remediation of harm done, and better tools and techniques for redirecting technologies as they grow."
