Vasco: A pause of a million years to do AI safety research before developing ASI sounds like lunacy at first glance, but I think it is actually quite reasonable on the cosmic time scale you mentioned.
Developing a bad ASI could hurt not just humanity, but intelligent life throughout the galaxy and the local cluster. That imposes a very heavy moral duty on us to get AI right.
If other intelligent aliens could vote on how long our AI Pause should be, they might very well vote for an extremely long, very risk-averse pause. And I think it is worth trying to incorporate their likely preferences into whatever decisions we make.