Fantastic to get this update—I was just complaining about the lack of good object-level AI policy proposals!
At the risk of letting the perfect be the enemy of the good, I would love a top-level post for each of the recommendations, going into much greater detail. Getting policy proposals discussed in the open, where they can be criticized from diverse perspectives, is crucial for arriving at policies that are robustly good.
One thing I find interesting to think about is how well-funded non-governmental actors might be able to bring these policies to life. After all, I expect most progress to come out of a few influential labs. Getting a handshake agreement from those labs would achieve results not too dissimilar from national legislation.
For rapid shutdown mechanisms, for example, the bottleneck seems to me to be developing the actual protocols just as much as getting them adopted. If a great protocol were developed that allowed OpenAI leadership to shut down, at the hardware level, a compute cluster running an experimental AI, and adopting it added little overhead, I feel there's a non-zero chance they would adopt it without any coercion. If the overhead is significant, how significant would it be? Is it within the bounds of what a wealthy actor could subsidize?
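To make the overhead question a bit more concrete, here is a minimal sketch of what just the authorization layer of such a protocol might look like: a k-of-n co-signing scheme where a quorum of keyholders must sign a fresh shutdown command before power is cut. Everything here is a hypothetical illustration, not any lab's actual mechanism; the keyholder names, key handling, and the `cut_power_to_cluster` stub are all assumptions for the sake of the example.

```python
import hmac
import hashlib
import time

# Hypothetical k-of-n co-signed shutdown command. Signatures are
# HMAC-SHA256 over a timestamped message so stale or replayed
# commands are rejected.

REQUIRED_SIGNATURES = 2        # k: quorum needed to authorize shutdown
MAX_COMMAND_AGE_SECONDS = 60   # reject commands older than this

# n pre-shared keys, one per authorized keyholder (in a real deployment
# these would live in hardware security modules, not in source).
KEYHOLDER_KEYS = {
    "keyholder_a": b"secret-key-a",
    "keyholder_b": b"secret-key-b",
    "keyholder_c": b"secret-key-c",
}


def sign_command(key: bytes, message: bytes) -> str:
    """One keyholder's signature over the shutdown message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()


def verify_shutdown(message: bytes, issued_at: float,
                    signatures: dict[str, str]) -> bool:
    """Return True only if a fresh quorum of valid signatures is present."""
    if time.time() - issued_at > MAX_COMMAND_AGE_SECONDS:
        return False  # stale command: possible replay
    valid = sum(
        1
        for holder, sig in signatures.items()
        if holder in KEYHOLDER_KEYS
        and hmac.compare_digest(sig, sign_command(KEYHOLDER_KEYS[holder], message))
    )
    return valid >= REQUIRED_SIGNATURES


def cut_power_to_cluster(cluster_id: str) -> None:
    """Stub: a real deployment would drive a hardware power controller
    (e.g. a PDU or BMC interface) for the named cluster."""
    print(f"[hardware] power cut to cluster {cluster_id}")


# Usage: two keyholders co-sign a shutdown of an experimental cluster.
issued_at = time.time()
message = f"SHUTDOWN cluster-exp-01 {issued_at}".encode()
signatures = {
    "keyholder_a": sign_command(KEYHOLDER_KEYS["keyholder_a"], message),
    "keyholder_b": sign_command(KEYHOLDER_KEYS["keyholder_b"], message),
}
if verify_shutdown(message, issued_at, signatures):
    cut_power_to_cluster("cluster-exp-01")
```

The point of the sketch is that the software side of this is cheap; the real overhead (and the part a wealthy actor might subsidize) is the hardware integration, key ceremony, and operational discipline around it.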