“woah, AI is powerful, I better be the one to build it”
I think this ship has long since sailed. The (Microsoft) OpenAI, Google DeepMind and (Amazon) Anthropic race is already enough to end the world. They have enough money, and all the best talent. If anything, governments entering the race might actually slow things down, by further dividing talent and the hardware supply.
We need an international AGI non-proliferation treaty. I think any risk from governments joining the race is more than outweighed by the chance of them working toward a viable treaty.
I don’t think “has the ship sailed or not” is a binary (see also this LW comment). We’re not actually at maximum attention-to-AI, and it’s still worth considering whether to keep pushing things in the direction of more attention-to-AI rather than less. And this is really a quantitative matter, since a treaty can only buy some time (probably at most a few years).
Good point re it being a quantitative matter. I think the current priority is to kick the can down the road a few years with a treaty. Once that’s done, we can see about kicking the can further. Without a full solution to x-safety|AGI (dealing with alignment, misuse and coordination), maybe all we can do is keep kicking the can down the road.