Nice post, Stephen!

From my perspective, talking about “a pause” can still be helpful, because I think we should be aiming to use a significant fraction of the roughly 1 billion years Earth will remain habitable to do AI safety research (even just 0.1% would be 1 million years). I also tend to agree with David Thorstad that extinction risks are greatly exaggerated, and that they can be mitigated without advanced AI, such that there is no rush to develop it. Of course, I simultaneously agree advanced AI is crucial for a flourishing long-term future! One can reasonably argue a pause as long as the one I am suggesting is utterly intractable, but I am not so confident. I have barely thought about these matters, but I liked the post Muddling Along Is More Likely Than Dystopia:
Summary: There are historical precedents where bans or crushing regulations stop the progress of technology in one industry, while progress in the rest of society continues. This is a plausible future for AI.
Vasco—A pause of a million years to do AI safety research, before developing ASI, sounds like lunacy at first glance—except I think that actually, it’s totally reasonable on the cosmic time scale you mentioned.
Development of a bad ASI could hurt not just humanity, but intelligent life throughout the galaxy and the local cluster. This imposes a very heavy moral duty to get AI right.
If other intelligent aliens could vote on how long our AI Pause should be, they might very well vote for an extremely long, very risk-averse pause. And I think it’s worth trying to incorporate their likely preferences into whatever decisions we make.