Thanks Yarrow,
In this report, we don’t actually classify any causes as longtermist or otherwise. We discuss this in footnote 3.
In this survey, as well as asking respondents about individual causes, we asked them how they would allocate resources across “Longtermist (including global catastrophic risks)”, “Near-term (human-focused)” and “Near-term (animal focused)”. We also asked a separate question in the ‘ideas’ section about their agreement with the statement “The impact of our actions on the very long-term future is the most important consideration when it comes to doing good.”
This is in contrast to previous years, where we conducted Exploratory Factor Analysis / Exploratory Graph Analysis of the individual causes, and computed scores corresponding to the “longtermist” (Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist) and “neartermist” (Mental health, Global poverty and Neartermist other) groupings we identified. As we discussed in those previous years (e.g. here and here), we used the terms “longtermist” and “neartermist” just as a matter of simplicity/convenience, matching the common EA understanding of those terms, but people might favour those causes for reasons other than longtermism / neartermism per se, e.g. due to decision-theoretic or epistemic differences.
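For readers curious what that grouping step looks like in practice, here is a minimal sketch, not our actual analysis code: the file and column names are hypothetical placeholders, and Exploratory Graph Analysis is usually run with the EGAnet package in R, so this just shows a plain two-factor EFA plus simple composite scores.

```python
# Minimal sketch (hypothetical file and column names, not the actual analysis):
# fit a two-factor model to individual cause ratings, then compute simple
# composite scores for the "longtermist" and "neartermist" groupings.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# respondents x causes, e.g. ratings on a 1-5 scale (hypothetical data)
ratings = pd.read_csv("cause_ratings.csv")

longtermist_causes = ["biosecurity", "nuclear_risk", "ai_risk",
                      "xrisk_other", "other_longtermist"]
neartermist_causes = ["mental_health", "global_poverty", "neartermist_other"]
all_causes = longtermist_causes + neartermist_causes

# Exploratory two-factor model over the individual cause ratings
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(ratings[all_causes].dropna())
loadings = pd.DataFrame(fa.components_.T, index=all_causes,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))  # inspect which causes load on which factor

# Simple composite scores: mean rating across each grouping's member causes
ratings["longtermist_score"] = ratings[longtermist_causes].mean(axis=1)
ratings["neartermist_score"] = ratings[neartermist_causes].mean(axis=1)
```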
Substantively, one might wonder: “Are people who support AI or other global catastrophic risk work allocating more resources to the ‘Near-term’ buckets, rather than to the ‘Longtermist (including global catastrophic risks)’ bucket, because they think that transformative AI will arrive in the near term and be impactful enough to dominate even if you discount long-term effects?” This is a reasonable question. But, as we show in the appendix, higher ratings of AI Safety are associated with significantly higher allocations (almost twice as large) to the “Longtermist” bucket, and lower allocations to the Near-term buckets. And, as we see in the Ideas and Cause Prioritizations section, endorsing the explicit “long-term future” item is strongly positively associated with higher prioritization of AI Safety.
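As a rough illustration of the kind of check reported in the appendix (again with hypothetical file and column names, and only a sketch of the general approach rather than our actual code), one could compare mean bucket allocations across AI Safety ratings and fit a simple linear model:

```python
# Minimal sketch (hypothetical file and column names): does a higher
# AI Safety rating go with a larger "Longtermist" allocation and smaller
# "Near-term" allocations?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data

# Mean allocation to each bucket at each AI Safety rating level
print(df.groupby("ai_safety_rating")[
    ["alloc_longtermist", "alloc_nearterm_human", "alloc_nearterm_animal"]
].mean().round(1))

# Simple linear model of the Longtermist allocation on the AI Safety rating
model = smf.ols("alloc_longtermist ~ ai_safety_rating", data=df).fit()
print(model.summary())
```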