I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and CURVE Sequence / Cross-Cause Model. We’re currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:
Private polling to assess public attitudes
Message testing / framing experiments, testing online ads
Expert surveys
Private data analyses and survey / analysis consultation
Impact assessments of orgs/programs
I formerly managed our Wild Animal Welfare department, and I’ve previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
Thanks Yarrow,
In this report, we don’t actually classify any causes as longtermist or otherwise. We discuss this in footnote 3.
In this survey, as well as asking respondents about individual causes, we asked them how they would allocate resources across “Longtermist (including global catastrophic risks)”, “Near-term (human-focused)” and “Near-term (animal focused)”. We also asked a separate question in the ‘ideas’ section about their agreement with the statement “The impact of our actions on the very long-term future is the most important consideration when it comes to doing good.”
This is in contrast to previous years, where we conducted Exploratory Factor Analysis / Exploratory Graph Analysis of the individual causes, and computed scores corresponding to the “longtermist” (Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist) and “neartermist” (Mental health, Global poverty and Neartermist other) groupings we identified. As we discussed in those previous years (e.g. here and here), we used the terms “longtermist” and “neartermist” just as a matter of simplicity/convenience, matching the common EA understanding of those terms; people might favour those causes for reasons other than longtermism / neartermism per se, e.g. because of decision-theoretic or epistemic differences.
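For illustration, here is a minimal sketch of the kind of exploratory factor analysis described above. It is not the report’s actual pipeline (which also used Exploratory Graph Analysis), and the column names are hypothetical placeholders for the cause ratings; it just indicates how individual cause ratings can be reduced to scores for the two groupings.

```python
# Minimal sketch only, assuming hypothetical column names; not the report's
# actual analysis pipeline (which also used Exploratory Graph Analysis).
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical data frame with one column of ratings per cause.
cause_cols = ["biosecurity", "nuclear_risk", "ai_risk", "xrisk_other",
              "longtermist_other", "mental_health", "global_poverty",
              "neartermist_other"]
ratings = pd.read_csv("cause_ratings.csv")[cause_cols].dropna()  # hypothetical file

# Fit a two-factor model with an oblique rotation (factors allowed to correlate).
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(ratings)

# Inspect which causes load onto which factor, then compute per-respondent
# factor scores corresponding to the two groupings.
loadings = pd.DataFrame(fa.loadings_, index=cause_cols,
                        columns=["factor_1", "factor_2"])
scores = fa.transform(ratings)  # shape: (n_respondents, 2)
print(loadings.round(2))
```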
Substantively, one might wonder: are people who support AI or other global catastrophic risk work allocating more resources to the “Near-term” buckets, rather than to the “Longtermist (including global catastrophic risks)” bucket, because they think that AI will arrive in the near term and its impact would be sufficiently large that it would dominate even if you discount long-term effects? This is a reasonable question. But as we show in the appendix, higher ratings of AI Safety are associated with significantly higher allocations (almost twice as large) to the “Longtermist” bucket, and with lower allocations to the Near-term buckets. And, as we see in the Ideas and Cause Prioritizations section, endorsing the explicit “long-term future” item is strongly positively associated with higher prioritization of AI Safety.
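As a rough indication of how such an association can be probed (not the appendix’s exact analysis; the column names below are hypothetical), one could regress bucket allocations on AI Safety ratings:

```python
# Rough sketch, not the report's actual analysis: regress the percentage
# allocated to each bucket on respondents' AI Safety rating.
# Column names ("ai_safety_rating", "longtermist_allocation",
# "neartermist_human_allocation") and the file name are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("allocations.csv")  # hypothetical file of survey responses

# A positive coefficient would indicate that AI Safety supporters allocate more
# to the "Longtermist (including global catastrophic risks)" bucket.
print(smf.ols("longtermist_allocation ~ ai_safety_rating", data=df).fit().summary())

# A negative coefficient would indicate lower allocations to a near-term bucket.
print(smf.ols("neartermist_human_allocation ~ ai_safety_rating", data=df).fit().summary())
```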