Maybe a more realistic example would be helpful here. There have been recent reports claiming that, although it will negatively affect millions of people, climate change is unlikely to be an existential risk. Suppose that’s true. Do you think EAs should devote as much time and effort to preventing climate change-level risks as they do to preventing existential risks?
Let’s speak about humanity in general and not about EAs, because where EAs focus their efforts does not depend only on the degree of the risk.
Yes, I don’t think humanity should currently devote less effort to preventing such risks than to x-risks. The point is probably that we are doing far too little to tackle dangerous non-immediate risks in general, so it makes no practical difference whether a risk is existential or only nearly existential. And this point of view does not seem controversial at all; it is just not explicitly stated. It is not just non-EAs who are devoting a lot of effort to preventing climate change, an increasing fraction of EAs are as well.
I suppose I agree that humanity should generally focus more on catastrophic (non-existential) risks.
That said, I think this is often stated explicitly. For example, MacAskill in his recent book explicitly says that many of the actions we take to reduce x-risks will also look good even to people with shorter-term priorities.
Do you have any quote from someone who says we shouldn’t care about catastrophic risks at all?
I’m not saying this. And I really don’t see how you came to think I do.
The only thing I’m saying is that I don’t see how anyone would argue that humanity should devote less effort to mitigating a given risk just because it turns out not to be existential, even though it may be more than catastrophic. Therefore, finding out whether a risk is actually existential or not is not really valuable.
I’m not saying anything new here; I’ve made this point several times above. Maybe it isn’t stated very clearly, but I don’t really know how to put it differently.