Hey, great post, I pretty much agree with all of this.
My caveat is: One aspect of longtermism is that the future should be big and long, because that’s how we’ll create the most moral value. But a slightly different perspective is that the future might be big and long, and so that’s where the most moral value will be, even in expectation.
The more strongly you believe that humanity is not inherently super awesome, the more important the latter view becomes. It’s not “moral value” in the sense of positive utility; it’s “moral value” in the sense of lives that can potentially be affected.
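To make the “in expectation” point concrete, here’s a toy back-of-envelope calculation (all the numbers are invented purely for illustration):

$$\mathbb{E}[\text{affectable lives}] \approx p(\text{big future}) \times N_{\text{future lives}} = 0.01 \times 10^{16} = 10^{14},$$

which still dwarfs the roughly $10^{10}$ people alive today, even with a deliberately pessimistic probability of the big future actually happening.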
For example, you write:
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
And I agree! But where you seem to be implying “the future will only be stable under totalitarianism, so it’s not really worth fighting for”, I would argue “the future will only be stable under totalitarianism, so it’s really important to fight totalitarianism in particular!” An overly simplistic way of thinking about this is that longtermism (at least in public popular writing) is mostly concerned with x-risk, but under your worldview, we ought to be much more concerned about s-risk. I completely agree with this conclusion; I just don’t think it goes against longtermism, though that might come down to semantics.
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It’s pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It’s a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber. So (again, totally guessing), many people have decided to just talk about x-risk, but use it as a way to advocate for getting talent and funding into AI Safety, which was the real goal anyway.
On a final note, if we take flavors of your view with varying degrees of extremity, we get, in order of strength of claim:
1. X-risk is less important than s-risk.
2. We should be indifferent about x-risk; there’s too much uncertainty, both ethically and about what the future will actually look like.
3. The potential for s-risk is so bad that we should invite, or even actively try to cause, x-risk, unless s-risk reduction turns out to be really tractable.
4. S-risks aside, humanity is just really net negative and we should invite x-risk no matter what.
(to be clear, I don’t think you’re making any of these claims yourself, but they’re possible paths views similar to yours might lead to).
Some of these strike me as way too strong and unsubstantiated, but regardless of what we think at the object level, it’s not hard to think of reasons these views might be under-discussed. So I think what you’re really getting at is something like, “Does EA have the ability to productively discuss info-hazards?” And the answer is that we probably wouldn’t know if it did.
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It’s pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It’s a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber.
I’m pretty sure that risks of scenarios a lot broader and less specific than extended intergalactic torture chambers count as s-risks. S-risks are defined simply as “risks of astronomical suffering.” So, for example, the risk of a sufficiently large future with a small but nonzero density of suffering would count as an s-risk. See this post from Tobias Baumann for examples.
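As a toy illustration of why a low density of suffering can still be astronomical in absolute terms (again, the numbers are invented purely for the sake of the example):

$$\text{total suffering} \approx \text{density} \times \text{scale} = 10^{-6} \times 10^{20} \text{ lives} = 10^{14} \text{ lives of serious suffering},$$

which would already be orders of magnitude more than the roughly $10^{11}$ humans who have ever lived.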
To be clear, by “x-risk” here, you mean extinction risks specifically, and not existential risks generally (which is what “x-risk” was coined to refer to, from my understanding)? There are existential risks that don’t involve extinction, and some s-risks (or all, depending on how we define s-risk) are existential risks because of the expected scale of their suffering.
Ah, yes, extinction risk, thanks for clarifying.