I’ve recently read the book Epic Measures, about the global burden of disease studies. One person involved in the studies had a habit of asking people who said their intervention was the most important, “what intervention is the second most important?”
It was intended as a gotcha, but I think it’s actually a really interesting question that sheds a lot of light on cause prioritization, and I’ve gotten a lot out of thinking about it.
So: people who prioritize a single charity, intervention, cause, or broad area in which to work, what charity/intervention/cause/broad area in which to work is second-best, and why?
Interesting question!
EA Survey Cause Selection data somewhat speaks to this. One difference is that we didn’t do forced ranking on the cause prioritisation scale, e.g. people could rate more than one charity as “near top priority,” but we can still compare the % of people who selected each cause as “near top priority” (the second highest ranking that could be given).
Below I show what % of people selected each cause as “near top” priority for those who selected AI, Poverty or Animal Welfare as “top priority” (I could do this for the other causes on request).
As you might expect, people who rate AI as top priority are more inclined to rate other LTF/x-risk causes as near top priority, and people who rate Poverty as top are more likely to rate Climate Change as near top (these tended to follow similar patterns in the analyses in our main report). Among people who selected Animal Welfare as top, the largest number selected Poverty as near top priority.
Notably, Biosecurity appears as the cause most selected as “near top” by AI advocates and the second most selected cause for those who rate Poverty top. This is in line with the results discussed in the main post, where Biosecurity received the highest % of “near top” ratings of any cause (slightly higher than Global Poverty) but very low numbers of “top priority” ratings, meaning that it is only middle of the pack (5/11) in terms of “top or near top priority” ratings.
I think causes that are more robust to cluelessness should be higher priority than causes that are less so.
I feel pretty uncertain about which cause in the “robust-to-cluelessness” class should be second priority.
If I had to give an ordered list, I’d say:
1. AI alignment work
2. Work to increase the number of people that are both well-intentioned & highly capable
3. …
1st = clean meat / GFI; 2nd = diet change / THL