Thank you everyone for the many responses! I will address one point which came up in multiple comments here as a top-level comment, and otherwise respond to comments.
Regarding the length of the long-term future:
My main concern here is that it seems really hard to reach existential security (i.e. extinction risks falling to smaller and smaller levels), especially given that extinction risks have been rising in recent decades. If we do not reach existential security, the expected future population is accordingly much smaller and gets less weight in my considerations.
I take concerns around extinction risks seriously—but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from ‘extinction risks are rising so much, we must prioritize them!’ to ‘there is lots of value in the long-term future’. The latter is only true if we manage to get rid of those extinction risks.
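To make this concrete, here is a toy calculation (the constant-risk model and the 10% figure are purely illustrative assumptions, not estimates from the post). If extinction risk stays at a constant rate r per century rather than falling towards zero, the expected number of future centuries is only about 1/r:

\[
E[\text{centuries survived}] \;=\; \sum_{t=1}^{\infty} (1-r)^{t} \;=\; \frac{1-r}{r} \;\approx\; \frac{1}{r} \quad \text{for small } r .
\]

A persistent r = 10% per century gives an expectation of roughly 9 further centuries, nothing like the astronomically long future that longtermist arguments lean on; the expectation only becomes vast if r keeps falling, i.e. if we reach existential security.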
The line about totalitarianism is not central for me. Oops, I clearly should not have opened the section with a reference to it.
I think even with totalitarianism, reaching existential security is really hard: the world would need to be permanently locked into a totalitarian state.
I recommend reading this shortform discussion on reaching existential security. Something that stood out to me in that discussion (in a comment by Paul Christiano: “Stepping back, I think the key object-level questions are something like ‘Is there any way to build a civilization that is very stable?’ and ‘Will people try?’ It seems to me you should have a fairly high probability on ‘yes’ to both questions.”), as well as in Toby’s EAG Reconnect AMA, is how much of the belief that we can reach existential security might be based on a higher level of baseline optimism about humanity than I have.
This is just a note that I still intend to respond to a lot of comments, but I will be slow! (I went into labour as I was writing my previous batch of responses and am busy baby cuddling now.)
I think you mean to say ‘existential risk’ rather than ‘extinction risk’ in this comment?
I think even with totalitarianism, reaching existential security is really hard: the world would need to be permanently locked into a totalitarian state.
Something I didn’t say in my other comment is that I do think the future could be very, very long under a misaligned AI scenario. Such an AI would have some goals, and having a very long time to achieve them would probably be useful to it. This wouldn’t really matter if there were no sentient life around for the AI to exploit, but we can’t be sure of that, as the AI may find it useful to make use of sentient life.
Overall, I am interested to hear your view on the importance of AI alignment, as from what I’ve heard it sounds like it could still be important even taking your various views into account.
If we do not reach existential security, the expected future population is accordingly much smaller and gets less weight in my considerations. I take concerns around extinction risks seriously—but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from ‘extinction risks are rising so much, we must prioritize them!’ to ‘there is lots of value in the long-term future’. The latter is only true if we manage to get rid of those extinction risks.
I don’t understand. It seems that you could see the value of the long-term future as unrelated to the probability of x-risk. Then, the more you value the long-term future, the more you value reducing x-risk.
I think a sketch of the story might go: let’s say your value for reaching the best final state of the long-term future is “V”.
If there’s a 5%, 50%, or 99.99% risk of extinction, that doesn’t affect V (but might make us sadder that we might not reach it).
Generally (assuming, for example, that x-risk can practically be reduced), the higher your value of V, the more likely you are to work on x-risk.
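To spell that out (with p as the probability of extinction and V as above, both placeholders rather than estimates): the expected value of the future is

\[
E[\text{value}] \;=\; (1-p)\,V ,
\]

so reducing extinction risk by a small amount \(\Delta p\) is worth about \(\Delta p \cdot V\). That payoff scales with V but does not depend on how large p currently is, which is why a high V and a high perceived p can sit together behind the same prioritization.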
It seems like this explains why the two views, “extinction risks are rising so much, we must prioritize them!” and “there is lots of value in the long-term future”, are correlated. So these views aren’t a contradiction.
Am I slipping in some assumption or have I failed to capture what you envisioned?