Hi, welcome to the forum.
You raise some interesting points. Some quick notes/counterpoints:
Not all existential risk is extinction risk.
Existential risk doesn’t have an extremely clean definition, but in the simple extinction/doom/non-utopia ontology, most longtermist EAs’ intuitive conception of “existential risk” is closer to risk of “doom” than risk of “extinction.”
Nuclear war may not be a large direct existential risk, but it’s an existential risk factor.
The world could be made scarier after a large-scale nuclear war, and thus less hospitable to altruistic values (plus other desiderata).
AI may or may not kill us all. But this point is academic and only mildly important, because if unaligned AI takes over, we (humanity and our counterfactual descendants) have lost control of the future.
Almost all moral value in the future is in the tails (extremely good and extremely bad outcomes).
Those outcomes likely require being optimized for, and it seems likely that our spiritual descendants will optimize far more heavily for good stuff than for bad stuff.
Bad stuff might still happen incidentally (historical analogues include factory farming and slavery), but since it isn’t being directly optimized for, it will likely be a small fraction of the badness of maximally bad outcomes.
Thank you for the response!
Yeah, I think my biggest problem is with (4), something I probably should have expressed more in the post.
It’s true that humans are, in theory, trying to optimize for good outcomes, and this is a reason to expect utility to diverge to infinity. However, in my view there are equally good reasons to expect utility to diverge to negative infinity: namely, that the world is not designed for humans. We are inherently fragile creatures, suited only to live in a world with a specific temperature, air composition, etc. There are many large-scale phenomena pushing these factors to change (s-risks) that could send utility plunging. This, plus the fact that current utility is below zero, means that I think existential risk is probably a moral benefit.
I also agree that this whole thing is pretty pedantic, especially in cases like AI domination.
“the world is not designed for humans”
I think our descendants are unlikely to be flesh-and-blood humans, but rather digital forms of sentience: https://www.cold-takes.com/how-digital-people-could-change-the-world/
I think the main question here is: what can we do today to make the world better in the future? If you believe AI could make the world a lot worse, or even just lock in the already existing state, it seems really valuable to work on preventing that. If you additionally believe AI could solve problems such as wild animal suffering or unhappy humans, then it seems like an even more important problem area to spend your time on.
(I think this might be less clear for biorisk where the main concern really is extinction.)