I don’t think “has the ship sailed or not” is a binary (see also this LW comment). We’re not actually at maximum attention-to-AI, and it is still worth considering whether to keep pushing things in the direction of more attention-to-AI rather than less. And this is really a quantitative matter, since a treaty can only buy some time (probably at most a few years).
Good point re it being a quantitative matter. I think the current priority is to kick the can down the road a few years with a treaty. Once that’s done we can see about kicking the can further. Without a full solution to x-safety|AGI (dealing with alignment, misuse and coordination), maybe all we can do is keep kicking the can down the road.