I found this article enthralling. But I have a critique:
"So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind."
A few people I know think this is not a very "wild" outcome. Earth could suffer a disaster that wipes out both humanity and the digital infrastructure needed to sustain advanced AI. I think this is a distinct possibility because humanity seems resilient, whereas IT infrastructure is especially brittle: it depends on electricity and communications systems of some sort.
To put some numbers on this:
In The Precipice, Toby Ord estimates that total existential risk is 1/6 in the next 100 years, and x-risk from AI is 1/10. So the total x-risk not from AI is 1/6 - 1/10 = 1/15 in the next century. Taking non-AI x-risk as a rough proxy for disasters that would take out humans and AI alike, and treating the per-century probability as a constant rate (worked arithmetic below), such a disaster would be expected roughly once every 1,500 years.
Given that humanity goes extinct, another intelligent species emerging on Earth and restarting civilization seems really unlikely. I'd put it at once every 100,000 years (a scientific wild-ass guess).
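For concreteness, here is the arithmetic behind the first estimate. The constant-rate reading of Ord's per-century probability is my simplifying assumption, not his:

$$\underbrace{\tfrac{1}{6}}_{\text{total x-risk}} - \underbrace{\tfrac{1}{10}}_{\text{AI x-risk}} = \tfrac{5-3}{30} = \tfrac{1}{15}\ \text{per century}$$

$$\text{expected waiting time} \approx \frac{100\ \text{years}}{1/15} = 1{,}500\ \text{years}$$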
Another intuition that may explain people's faith in the "skeptical view": Species come and go on Earth all the time. Humans are just another species, one that is "disrupting" the "natural order" of Earth's biosphere, and will eventually go extinct too.
If humanity simply goes extinct without ever meaningfully expanding into space, I agree that outcome would not be particularly wild.
However, I would find it wild to think this is definitely (or even "overwhelmingly likely") where things are heading. (While I also find it wild to think there's a decent chance that we will reach galaxy scale.)
I agree with that. I think humanity (as a cultural community, not the species) will most likely have the ability to expand across the Solar System this century, and will most likely have settled other star systems by a billion years from now, when Earth is expected to become uninhabitable.