I’m sorry, this doesn’t engage with the main point(s) you are trying to make, but I’m not sure why you use the term “existential risk” (which you define as risks of human extinction and undesirable lock-ins that don’t involve s-risk-level suffering) when you could have just used the term “extinction risk”.
You say:
If you’re uncertain whether humanity’s future will be net positive, and therefore whether existential risk[1] reduction is good, you might reason that we should keep civilization going for now so we can learn more and, in the future, make a better-informed decision about whether to keep it going.
If humanity’s future is net negative, reducing extinction risk is bad. However, reducing the risk of “undesirable lock-ins” seems robustly good no matter what the expected value of the future is. So I’m not sure bucketing these two together under the heading of “existential risk” really works.
Thanks :) Good point.
Minor point: I don’t think it’s strictly true that reducing risks of undesirable lock-ins is robustly good no matter what the expected value of the future is. It could be that a lock-in is itself undesirable but prevents an even worse outcome from occurring.
I included other existential risks in order to counter the following argument: “As long as we prevent non-s-risk-level undesirable lock-ins in the near term, future people can coordinate to prevent s-risks.” This is a version of the option value argument that isn’t about extinction risk. I realize this might be a weird argument for someone to make, but I covered it to be comprehensive.
But the way I wrote this, I was pretty much just focused on extinction risk. So I agree it doesn’t make a lot of sense to include other kinds of x-risks. I’ll edit this now.