EA and Global Poverty. Let’s Gather Evidence

There was a recent discussion on Twitter about whether global development had been deprioritised within EA. This struck a chord with some people (*edit:* despite the claim in the Twitter thread being false). So:

What is the priority of global poverty within EA, compared to where it ought to be?

I am going to post some data and some theories. I'd like people in the comments to try to falsify them; then we'd know the answer.

  • Some people seem to think that global development is lower priority than it should be within EA. Is this view actually widespread?

  • Global poverty was held in very high esteem in 2020, and without further evidence we should assume it still is. In the 2020 survey, no cause area had a higher average rating (I'm eyeballing this graph) or a higher percentage of "near top" plus "top priority" ratings. In other words, in 2020 global development was considered the highest priority by EAs in general.

  • The FTX Future Fund lists economic growth as one of its areas of interest (https://ftxfuturefund.org/area-of-interest/).

  • Theory: Elite EA conversation discusses global poverty less than AI or animal welfare. What is the share of each cause area among forum posts, 80k episodes, or EA tweets? I'm sure some of this information is trivial for one of you to find. Is this theory wrong?

  • Theory: Global poverty work has ossified around GiveWell and their top charities. Jeff Mason and Yudkowsky both made variations of this point. Yudkowsky's reasoning was that risk-takers hadn't gone into global poverty research anyway; it attracted a more conservative kind of person. I don't know how to operationalise a test of this, but maybe one of you can.

  • Personally, I think that many people find global poverty uniquely compelling. It's unarguably good. You can test it. It has quick feedback loops (compared to many other cause areas). I think it's good to be in coalition with the most effective area of an altruistic space that resonates with so many people. I like global poverty as a key concern (even though it's not my key concern) because I like good coalition partners. And longtermist and global development EAs seem to me to be natural allies.

  • I can also believe that if we care about the lives of people currently alive in the developing world and have AI timelines of less than 20 years, we shouldn't focus on global development. I'm not an expert here and this view makes me uncomfortable, but conditional on short AI timelines, I can't find fault with it. In terms of QALYs, there may be more risk to the global poor from AI than from malnourishment. If this is the case, EA would move away from being divided by cause areas towards a primary divide of "AI soon" vs "AI later" (though deontologists might argue it's still better to improve people's lives now rather than save them from something that kills all of us). Feel free to suggest flaws in this argument.

  • I’m going to seed a few replies in the comments. I know some of you hate it when I do this, but please bear with me.

What do you think? What are the facts about this?

Endnote: I predict with 50% probability that this discussion won't work (resolved by me in two weeks). I think that people don't want to work together to build a somewhat vague discussion on the forum. We'll see.