I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We’re currently working on tools to help EAs decide how to allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:
Private polling to assess public attitudes
Message testing / framing experiments, and testing of online ads
Expert surveys
Private data analyses and survey / analysis consultation
Impact assessments of orgs/programs
I formerly managed our Wild Animal Welfare department, and I’ve previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
This isn’t expressing disagreement, but I think it’s also important to consider the social effects of speaking in line with different epistemic practices, e.g.:
When someone says “AI will kill us all”, do people understand us as expressing 100% confidence in extinction, or do they interpret it as mere hyperbole and rhetoric, and infer that what we actually mean is that AI will potentially kill us all or have other drastic effects?
When someone says “There’s a high risk AI kills us all or disempowers us”, do people understand this as us expressing very high confidence that it will kill us all, or as saying it almost certainly won’t kill us all?