“Is this risk actually existential?” may be less important than we think
[Epistemic status: quickly written post about something I have basically always felt but never really put into words. It is still a half-baked and really simple idea, but I don’t think I have ever come across anything similar]
EA and adjacent movements put a lot of attention into distinguishing whether a risk is actually an x-risk or not. Of course, the difference in outcomes between a realised x-risk and a realised non-x-risk is not one of degree but of kind: humanity (and its potential) either exists or it doesn’t.
I think the fact that the difference between x-risks and other risks is one of kind makes EAs spend way too much energy and time assessing whether a risk is actually existential or not. I don’t think this is time well spent. In general, if somebody has to put in a good deal of effort and research to assess this, it means that the risk is a hell of a risk, and the incentives to minimise it are most likely already maximal. [I am referring here basically to close calls, but I can imagine extending this to global catastrophic risks, for example.]
In practical terms, these differences seem almost irrelevant to me. Does whether a risk is actually existential or not make any difference to the actions anyone would possibly want to take to mitigate it? For example: does it make any difference whether a non-aligned superintelligent AGI will actively try to kill all of humanity or not? If we are certain that it won’t, we would still live in a world where we are the ants and the AGI stands where humanity stands now. Even if we think we could eventually climb out of our ‘ant state’ into a state with more potential for humanity, should we really put less effort into mitigating this risk than if we thought the AGI would eliminate us? It would feel very odd to me to answer yes to this question. [edited to add the following:] The reality is that resources (all of them: money, energy, effort, time) are finite, and not enough of them are devoted to mitigating very large risks in general. Until this changes, whether a risk is actually existential or not seems to me much less important than EA as a movement thinks.
On another level, there is also the issue that such nitpicks generate a lot of debate and are often difficult for the general public to fully understand. More often than we would like, these debates contribute to the growing wave against EA, since from the outside this can look like some nerds having fun / wasting time and calling themselves “effective” for doing so.
[I wrote this post pretty fast and almost in one go. Please tell me if anything is unclear, improvable, or wrong, and I will try to update it accordingly]