I don’t think the points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g., by developing permitting requirements or setting guidelines for what AI research is permitted. Once that is done, the specifics of how AI is regulated are mostly up to the executive branch, and they can and will change over time.
Because of this, it is never “too soon” to order the regulation of AI. We may not know exactly what those regulations should look like, but the specifics are very unlikely to be written into the law itself anyway. What we want right now is to create mechanisms for developing and enforcing safety standards. A similar argument applies to internal safety standards at companies developing AI capabilities.
It seems really hard for us to know exactly when AGI (or ASI, or whatever you want to call it) is actually imminent. Even if it were possible, though, I don’t think last-minute panicking about AGI would accomplish much. It’s all but impossible to quickly build societal consensus that the world is about to end before any harm has actually occurred. There’s an unrealistic image implicit in this post of “we will panic and then everyone will agree to immediately stop AI research.” The smart thing to do is to develop these mechanisms early and then use them when we get closer to crunch time.