I found this article enthralling. But I have a critique:
So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.
A few people I know think this is not a very “wild” outcome. Earth could suffer a disaster that wipes out both humanity and the digital infrastructure needed to sustain advanced AI. I think this is a distinct possibility because humanity seems resilient, whereas IT infrastructure is especially brittle: it depends on a continuous supply of electricity and on working communications networks.
To put some numbers on this:
In The Precipice, Toby Ord estimates total existential risk at 1/6 over the next 100 years, and x-risk from AI at 1/10. The total x-risk not from AI is therefore 1/6 − 1/10 = 1/15 per century, which corresponds to an expected rate of roughly one such disaster (one in which humans and AI both go extinct) every 1,500 years.
Conditional on humanity going extinct, another intelligent species emerging on Earth and restarting civilization seems really unlikely. I’d put it at a rate of about once every 100,000 years (a scientific wild-ass guess).
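The arithmetic above can be checked with a short sketch (using exact fractions so no rounding sneaks in; the 100-year window is Ord's, the rest is just the subtraction from the text):

```python
from fractions import Fraction

total_xrisk = Fraction(1, 6)   # Ord's total existential risk over the next century
ai_xrisk = Fraction(1, 10)     # Ord's estimate for x-risk from AI

# Risk per century from everything other than AI
non_ai_xrisk = total_xrisk - ai_xrisk
print(non_ai_xrisk)            # 1/15

# A 1/15 chance per 100 years is an expected rate of one event
# every 100 / (1/15) = 1,500 years.
years_per_event = 100 / non_ai_xrisk
print(years_per_event)         # 1500
```

Treating a per-century probability as a steady rate is itself an approximation, but it is good enough for order-of-magnitude comparisons like this one.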
Another intuition that may explain people’s faith in the “skeptical view”: Species come and go on Earth all the time. Humans are just another species—and, at that, are “disrupting” the “natural order” of Earth’s biosphere, and will eventually go extinct too.
If humanity simply goes extinct without reaching meaningful space expansion, I agree that that outcome would not be particularly wild.
However, I would find it wild to think this is definitely (or even “overwhelmingly likely”) where things are heading. (While I also find it wild to think there’s a decent chance that we will reach galaxy scale.)
I agree with that. I think humanity (as a cultural community, not the species) will most likely have the ability to expand across the Solar System this century, and will most likely have settled other star systems by a billion years from now, when Earth is expected to become uninhabitable.