> because it feels very differently about “99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation” and “100% of humanity is destroyed, civilisation ends”
Maybe? This depends on what you think about the probability that intelligent life re-evolves on Earth (it seems likely to me) and on how good you expect the next intelligent species on Earth to be relative to humans.
> the particular focus on extinction increases the threat from AI and engineered biorisks
IMO, most x-risk from AI probably doesn't come from literal human extinction, but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive. But fair enough.
> Maybe? This depends on what you think about the probability that intelligent life re-evolves on Earth (it seems likely to me) and on how good you expect the next intelligent species on Earth to be relative to humans.
Yeah, it seems possible to be a longtermist without thinking that human extinction entails the loss of all hope, but extinction still seems more important to the longtermist than to the neartermist.
> IMO, most x-risk from AI probably doesn't come from literal human extinction, but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive. But fair enough.
Valid. I guess longtermists and neartermists will also feel quite differently about this fate.
Conditioned on human extinction, do you expect intelligent life to re-evolve with levels of autonomy similar to what humanity has now (which seems quite important for assessing how bad human extinction would be on longtermist grounds)? I don’t think it’s likely.
Maybe the underlying crux (if your intuition differs) is what proportion of human extinction scenarios (not including non-extinction x-risk) involve intelligent/agentic AIs, and/or other conditions that would significantly limit the potential of new intelligent life even if it did re-emerge. My current low-resilience impression is that it's probably 90+%.
And the above considerations and credences make how good the next intelligent species would be relative to humans fairly inconsequential.
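To make that concrete, here's a toy decomposition (all symbols and numbers are illustrative, not taken from the discussion above). Write the longtermist value of an extinction scenario as

$$\mathbb{E}[V \mid \text{extinction}] \approx p_{\text{re-emerge}} \cdot r \cdot V_{\text{human}},$$

where $p_{\text{re-emerge}}$ is the probability that comparably autonomous intelligent life re-evolves and $r$ is its value relative to a human-led future. If 90+% of extinction scenarios block re-emergence, then $p_{\text{re-emerge}} \lesssim 0.1$, so even swinging $r$ from $0.5$ to $1$ moves $\mathbb{E}[V \mid \text{extinction}]$ by at most $0.05\,V_{\text{human}}$. On these (hypothetical) credences, the successor-quality question is second-order relative to the re-emergence question.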