I think it hinges on whether our AI successors would be counted as “life”, or whether they “matter morally”. I think the answer is likely “no” to both[1]. The risk of extinction therefore boils down to the risk of misaligned ASI wiping out the biosphere, which I think is ~90% likely this century on the default trajectory, absent a well-enforced global moratorium on ASI development.
Or at least “no” to the latter: if we consider viruses to be life that doesn’t matter morally, or that is in fact morally negative, we can consider (by-default-rogue) ASI to be similar.