For example: does it make any difference whether a non-aligned superintelligent AGI will actively try to kill all of humanity or not? Even if we were certain that it won’t, we would still live in a world where we are the ants and the AGI is humanity.
This misunderstands what an existential risk is, at least as used by the philosophers who’ve written about this. Nick Bostrom, for example, notes that the extinction of humanity is not the only thing that counts as an existential risk. (The term “existential risk” is unfortunately a misnomer in this regard.) Something that drastically curtails the future potential of humanity would also count.
;-)
I’m not sure I understand your point then...
Surely a future in which humanity flourishes into the longterm future is a better one than a future where people are living as “ants.” And if we have uncertainty about which path we’re on and there are plausible reasons to think we’re on the ant path, it can be worthwhile to figure that out so we can shift in a better direction.
Exactly. Even if the ant path may not be permanent, i.e. even if we could climb out of it.
My point is that, in terms of the effort I would like humanity to devote to minimising this risk, I don’t think it makes any difference whether the ant state is strictly permanent or whether we could eventually get out of it. Maybe if it were guaranteed that we could get out of the ant state, or even “only” very likely, I could understand devoting less effort to mitigating this risk than if we thought the AGI would eliminate us (or that the ant state would be inescapable).
If we agree on this, then whether a risk is actually existential or not is in practice close to irrelevant.
Maybe a more realistic example would be helpful here. There have been recent reports claiming that, although it will negatively affect millions of people, climate change is unlikely to be an existential risk. Suppose that’s true. Do you think EAs should devote as much time and effort to preventing climate-change-level risks as they do to preventing existential risks?
Let’s speak about humanity in general and not about EAs, because where EAs focus does not depend only on the degree of the risk.
Yes, I don’t think humanity should currently devote less effort to preventing such risks than to x-risks. Probably the point is that we are doing far too little to tackle dangerous non-immediate risks in general, so it does not make any practical difference whether a risk is existential or only almost existential. And this point of view does not seem controversial at all; it is just not explicitly stated. It is not only non-EAs who are devoting a lot of effort to preventing climate change; an increasing fraction of EAs do as well.
I suppose I agree that humanity should generally focus more on catastrophic (non-existential) risks.
That said, I think this is often stated explicitly. For example, MacAskill, in his recent book, explicitly says that many of the actions we take to reduce x-risks will also look good even to people with shorter-term priorities.
Do you have any quote from someone who says we shouldn’t care about catastrophic risks at all?
I’m not saying this. And I really don’t see how you came to think I do.
The only thing I am saying is that I don’t see how anyone would argue that humanity should devote less effort to mitigating a given risk just because it turns out that it is not actually existential, even though it may be more than catastrophic. Therefore, finding out whether a risk is actually existential or not is not really valuable.
I’m not saying anything new here; I made this point several times above. Maybe I haven’t stated it very clearly, but I don’t really know how to put it differently.