I can’t follow what you’re saying in the ‘AGI will be aligned by default’ section. I think you’re saying in that scenario it will be so good that you should disregard everything else and try and make it happen ASAP? If so, that treats all other x-risk and trajectory change scenarios as having probability indistinguishable from 0, which can’t be right. There’s always going to be one you think has distinctly higher probability than the others, and (as a longtermist) you should work on that.
I think that given the AGI timelines of the EA community, yes, other x-risks have a probability of causing extinction roughly indistinguishable from 0. And conditional on AGI going well, we'll most likely make it past the other risks too.
Whereas without AGI, biological x-risks might become significant, not in the short run but in the second half of the century.