Thank you for the response!
Yeah, I think I have the most problem with (4), something I probably should have expressed more clearly in the post.
It's true that humans are, in theory, trying to optimize for good outcomes, and this is a reason to expect utility to diverge to infinity. However, in my view there are equally good reasons to expect utility to diverge to negative infinity: the world is not designed for humans. We are inherently fragile creatures, only suited to live in a world with a specific temperature, air composition, etc. There are many large-scale phenomena (s-risks) that could change these factors and send utility plunging. This, plus the fact that current utility is below 0, means that I think existential risk is probably a moral benefit.
I also agree that this whole thing is pretty pedantic, especially in cases like AI domination.
“the world is not designed for humans”
I think our descendants are unlikely to be flesh-and-blood humans, and will more likely be digital forms of sentience: https://www.cold-takes.com/how-digital-people-could-change-the-world/
I think the main question here is: What can we do today to make the world better in the future? If you believe AI could make the world a lot worse, or even just lock in the already existing state, it seems really valuable to work on preventing that. If you additionally believe AI could solve problems such as wild animal suffering or unhappy humans, then it seems like an even more valuable problem area to spend your time on.
(I think this might be less clear for biorisk, where the main concern really is extinction.)