Is AI risk classified as a longtermist cause? If so, why?
It seems like a lot of people in EA think that AI risk is a relevant concern within the next 10 years, let alone the next 100 years. My impression is that most of the people who think so believe the near term is enough to justify worrying about AI risk, and that you don’t need to invoke people who won’t be born for another 100 years to make the case.
Thanks Yarrow,
In this report, we don’t actually classify any causes as longtermist or otherwise. We discuss this in footnote 3.
In this survey, as well as asking respondents about individual causes, we asked them how they would allocate resources across “Longtermist (including global catastrophic risks)”, “Near-term (human-focused)” and “Near-term (animal focused)”. We also asked a separate question in the ‘ideas’ section about their agreement with the statement “The impact of our actions on the very long-term future is the most important consideration when it comes to doing good.”
This is in contrast to previous years, where we conducted Exploratory Factor Analysis / Exploratory Graph Analysis of the individual causes and computed scores corresponding to the “longtermist” (Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist) and “neartermist” (Mental health, Global poverty and Neartermist other) groupings we identified. As we discussed in those previous years (e.g. here and here), we used the terms “longtermist” and “neartermist” just as a matter of simplicity/convenience, matching the common EA understanding of those terms, but people might favour those causes for reasons other than longtermism / neartermism per se, e.g. decision-theoretic or epistemic differences.
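For readers curious what that step looks like in practice, here is a minimal sketch of a two-factor Exploratory Factor Analysis on per-respondent cause ratings, assuming the data sit in a pandas DataFrame. The column names, the synthetic placeholder data, and the use of the factor_analyzer package are my illustrative assumptions, not the survey's actual pipeline.

```python
# Illustrative sketch only: two-factor EFA on cause ratings, roughly the kind of
# analysis described above. Column names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

cause_cols = [
    "ai_risk", "biosecurity", "nuclear_risk", "xrisk_other", "other_longtermist",
    "global_poverty", "mental_health", "other_neartermist",
]

# Placeholder ratings (1-5) standing in for real survey responses.
rng = np.random.default_rng(0)
cause_ratings = pd.DataFrame(
    rng.integers(1, 6, size=(200, len(cause_cols))), columns=cause_cols
)

# Fit a two-factor model with an oblique rotation (factors allowed to correlate).
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(cause_ratings[cause_cols])

# Loadings show which causes group onto which factor; transform() gives
# per-respondent scores on the two dimensions (e.g. "longtermist"/"neartermist").
loadings = pd.DataFrame(fa.loadings_, index=cause_cols, columns=["factor_1", "factor_2"])
scores = pd.DataFrame(fa.transform(cause_ratings[cause_cols]), columns=["factor_1", "factor_2"])
print(loadings.round(2))
```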
Substantively, one might wonder: “Are people who support AI or other global catastrophic risk work allocating more resources to the ‘Near-term’ buckets, rather than to the ‘Longtermist (including global catastrophic risks)’ bucket, because they think that AI will happen in the near term and be sufficiently large that it would dominate even if you discount long-term effects?” This is a reasonable question. But as we show in the appendix, higher ratings of AI Safety are associated with significantly higher allocations (almost twice as large) to the “Longtermist” bucket, and lower allocations to the Near-term buckets. And, as we see in the Ideas and Cause Prioritizations section, endorsing the explicit “long-term future” item is strongly positively associated with higher prioritization of AI Safety.
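As a rough illustration of the kind of association described in that paragraph (not the report's actual analysis), one could regress the “Longtermist” allocation on the AI Safety rating. The variable names and the simulated data below are hypothetical stand-ins for the survey data.

```python
# Rough illustration of the association discussed above (not the report's code).
# A positive, significant coefficient on ai_safety_rating would correspond to the
# reported pattern: higher AI Safety ratings go with larger "Longtermist" allocations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"ai_safety_rating": rng.integers(1, 6, size=300)})  # placeholder ratings
# Placeholder allocations (percent of resources) constructed to show a positive relationship.
df["longtermist_allocation"] = 10 * df["ai_safety_rating"] + rng.normal(0, 10, size=300)

model = smf.ols("longtermist_allocation ~ ai_safety_rating", data=df).fit()
print(model.summary())
```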