This is Miranda Kaplan, a communications associate at GiveWell. Thanks so much for publishing this; we really value critiques of our work, especially of our approach to moral weights, which is a particularly challenging feature of our analysis. I’ve shared this internally so the relevant people can take a thorough read through it, and we may share a response here, as we are able.
I really like this post and was excited to read it. I find most of it very valuable, which is important for me to say because I’ll only comment on 2 small things:
You quote from the report about racism in aid:
As you wrote in your footnote, I’m not sure I take claims like these at face value, since it’s my impression that aid distributed by recipient governments is inefficient or harmful much of the time, and it’s worse the less democratic a country is. There are only a handful of democracies in Africa. So you’d expect “country programmable aid” in most African countries to not be distributed according to the will of the intended recipients at all.
I’d be much more interested in knowing, for example, what percentage of aid programs were done in cooperation with locals.
Do you know LEEP (the Lead Exposure Elimination Project)? It was also incubated by CE, and I think it’s another excellent positive example of the kind of thing you’re talking about. They partner with governments and paint manufacturers to regulate and eliminate lead paint from their markets, by building local networks, performing paint studies, and providing information and technical assistance. Their projects page is really exciting.
...Maybe all the orgs that come from CE are like that?
Thanks for your comment. Glad you liked the post.
In response to 1.
Yeah, there’s a scale from:
individuals (GiveDirectly)
community projects (the two case studies were operating at a somewhat-community level)
government interventions (KarenInKenya does work on this level… consulting with one or multiple Govts?)
International / top-down (“We’ll buy X mosquito nets and distribute them as efficiently as possible”)
I don’t think there’s much infrastructure set up for enabling communities directly, which I’d be interested to see someone try to design. I think there’s potential. One thing Karen mentions in the 80,000 Hours interview is that you don’t want to burden communities to provide services that they should just have by default, which is why she works at a governmental level to support the government to design and provide these services.
There was also a long tangent in my research into whether EA should be considering community-level infrastructure rather than programmes like GiveDirectly. The Page and Pande paper in the footnotes is pretty interesting and has some good discussion that was cut from the list of most persuasive arguments.
In response to 2.
I did a quick audit of Charity Entrepreneurship’s orgs and
(1) there didn’t seem to be too many that were overtly designed because the founders had insider-positioning in their target community. However, I knew I didn’t have time to research the background story for each founder individually and drawing conclusions from anything less thorough would clearly be bad on multiple levels. Therefore, I cut that thread from the original post.
(2) LEEP and Family Empowerment Media both seemed like examples relevant to the post. There are a couple that seem to be policy-based, and it’s unclear whether there’s a lot of insider-positioning in those circumstances, and also unclear whether policy initiatives benefit from insider positioning. I’d be excited to discuss this in way more depth.
I’m in favour of discussion about our possible blindspots and I don’t think we should avoid discussions of these topics when they naturally come up, although I don’t know how I feel about increasing conversations about “colonisation and institutional racism” within EA.
I think whether or not such conversations are productive or merely cause a bunch of conflicts that don’t change anyone’s mind is highly dependent on context and how these conversations are run. So I would be reluctant to make such a broad suggestion, at least without adding a bunch of caveats.
Yeah. I was quite nervous about posting this as a written critique because I agree—it can be really easy to talk past each other in discussions about colonialism and institutional racism, and this is exacerbated in text conversations because (in my experience) people often attach different meanings, experiences or inferences to the same word.
When I usually discuss these topics, it’s an in-person conversation where people are trying to really understand and connect to the others in the conversation, and have time to check understanding and rephrase as the conversation continues.
What kind of caveats would you add, out of interest?
One framework I’ve come across for discussing racism is to keep it personal, local and immediate—i.e. talk about your own experiences, and avoid speaking for other people. However, this seems counter to the EA conversational norms (e.g. see my reply to Rubi’s comment on this post) where we like to use concrete examples and hypotheticals.
I guess, if I was forced not to use hypotheticals or advocate for others’ experiences in EA, I would be really incentivised to seek others’ voices, and maybe that wouldn’t be the worst thing, in terms of genuinely bringing others to the table.
I guess the two main aspects are:
Is there a high-quality facilitator available who will be respected both by people in favour of these frameworks and by people who are skeptical of them?
Are the actual conversation participants likely to be able to have a productive conversation or are people likely to just talk past each other even with a quality facilitator?
One caveat is that it would need to be heavily moderated in order to be useful, and the moderators would need to be willing to swiftly ban people who derail the topic or use it to politicize the event. Normally, EA’s freewheeling norms are enough, but that’s only because very little politics enters the EA Forum, and EA itself is mostly apolitical, so the most severe forms of motivated reasoning are less problematic. Given that this is a political topic, there needs to be much stronger moderation if it’s to be useful.
Another caveat is to display trigger warnings. Normally, I don’t think trigger warnings are necessary, but in this context, where real politics are being discussed, and often the most fraught kinds of political topics at that, they’re almost a necessity to make sure everyone is aware of that problem.
This conversation can be useful, which is why I support it, but there are severe problems that need to be addressed in order to make it a reality.
Although I don’t agree with all of the conclusions, I enjoyed reading this post and think some critique is extremely important.
You write: EA’s lack of diversity and parallels to colonialism and institutional racism have meaningful repercussions for EA as a movement. When we can’t engage with people who make these critiques, it damages our reputation and credibility.
I do think that, with the EA movement’s recent shift towards political influence, optics and finding political allies are increasingly important. EA ideas can resonate with both conservatives and the left, but I think the (centre-)left is probably the most natural ally when it comes to tech/AI regulation, larger foreign aid budgets and improved animal welfare.
From my observations, the EA movement is currently doing a mediocre job of aligning with the (centre-)left, mainly due to a lack of diversity in the movement (especially in leadership) and the optics of being a billionaire-backed movement. I saw an outburst of different forms of critique of EA and longtermism from the (SJ-)left on Twitter in response to the Time article, from some of the usual suspects (Timnit Gebru (multiple tweets an hour), Phil Torres) and some people new to me, e.g. Seth Lazar (although supported by Rob Reich) and many others.
As you mentioned, CEA already works on the topic of diversity, although I think there is way more to be done. Some ideas (work may already be underway on some of these):
Actively steering newly starting groups to prevent path dependency that works against diversity (e.g. a focus on gender parity in group leadership, a more diverse cause area focus for the group from the start)
Way more effort put into community building in:
The Global South
Non-elite universities with greater diversity in their student populations
More focus on other types of impactful career paths that attract a diverse group, instead of an overly heavy focus on AI safety work: e.g. animal welfare, global health & development, and policy careers
Regarding the critique of EA being too much of a billionaire / finance-friendly movement:
Without actually engaging in addressing and researching issues related to global wealth disparity, we create the impression that we are biased in this discussion in favour of large wealth disparities. We should do more research into this potential cause area and its interventions, e.g. fair taxation, the rent-seeking nature of the financial sector, and the optimal division of wealth created by the (SV) tech sector. If we conclude that those topics are not important, we should have excellent arguments for why we don’t work on them. For now it just feels that these topics are not sufficiently discussed, and this makes EA too suspicious to be an ally for the (centre-)left.
I think I recently saw something about EA billionaires trying to increase tax on billionaires, which is along the lines of what you suggest.
Your suggestions for diversity in local groups would reduce the blindspots of my own that I uncovered during the writing of this post—I think it’s easy to fall into patterns as a group based on the interests of the members, and therefore forget how wide the conversation and action under the umbrella “EA” is.
The focal point of the post is more around EA’s potential to do more power-sharing, rather than solely increase the diversity of people within EA (though diversity is part of it). I think of it like a consultancy: a consultancy usually isn’t criticised for being too homogenous. Instead, people just decide not to use that consultancy in favour of one that has the skills/perspectives/track record/etc that they’re looking for. Although EA isn’t one centralised company, I see similarities because (in many but not all) cases, we are a group of people who are trying to apply tools to other peoples’ problems.
A consultancy doesn’t need political allies or alignment, though it may choose to take on projects of a certain flavour. I’d be interested to hear your thoughts on whether EA needs to (or should be) seeking the political alignment you mentioned.
I really liked this post—appreciated how detailed and constructive it was! As one of the judges for the red-teaming contest, I personally thought this should have gotten a prize, and I think it’s unfortunate that it didn’t. I’ve tried to highlight it here in a comment on the announcement of contest winners!
This comment is object-level, perhaps nitpicky, and I quite like your post on a high level.
Saving a life via, say, malaria nets gets you two benefits:
1. The person saved doesn’t die, meeting their preference for continuing to exist
2. The externalities of that person continuing to live, such as the grief their family and community are spared.
I don’t think it’s too controversial to say that the majority of the benefit from saving a life goes to the person whose life is saved, rather than the people who would be sad that they died. But the IDinsights survey only provides information about the latter.
Consider what would happen if beneficiary surveys in future communities found the opposite conclusion: that certain beneficiaries did not care at all about the deaths of children under the age of 9. It would be ridiculous and immoral to defer to that preference and not provide any life-saving aid to those children. The reason is that the community being surveyed is not the primary beneficiary of aid to their children; their children are, so the community’s preferences make up a small fraction of the aid’s value. But this also goes the other way: if the surveyed community overweights the lives of their children, that isn’t a reason for major deferral, especially if stated preferences contradict revealed preferences, as they often do.
Yeah, and the IDinsight study only looked at #2 from your list above, which is one of the limitations and one of the reasons more research would be good. This hits at a “collectivist culture vs individualist culture” nuance too, I suspect, because that could influence the weightings of #1 vs #2.
In a 2012 blog post Holden wrote about the GiveWell approach being purposefully health and life-based as this is possibly the best way to give agency to distant communities: https://blog.givewell.org/2012/04/12/how-not-to-be-a-white-in-shining-armor/
And they also have a note somewhere on their website about flow-on effects: GiveWell assumes the flow-on effects from health/life-saving interventions are probably more cost-effective than the flow-on effects from infrastructural interventions that end up improving health and lifespan.
In response to your comment about deferring to a hypothetical community that provides no life-saving intervention to people under 9 years old: if people had good access to information and resources, and their group decision was to focus a large amount of resources on saving the lives of extremely old people in the community … maybe we should do this? I say this because I can think of reasons a community might want grandparents around for another few years (e.g. to pass on language, culture, knowledge) instead of more children at the moment. I think, if a community were at massive risk of loss of culture, the donors’ insistence on saving young lives over the elders’ lives could be incredibly frustrating.
I’m not saying this to draw any conclusions, just offering it as a counter-example that introduces a little more nuance than “morally wrong to let under-9-year-olds die unnecessarily.”
If I had to deal with the situation as proposed here:
At the end of the day, I tend towards individual fairness over group/cultural fairness, primarily because I don’t care too much for cultural essentialism/cultural preservation efforts. Thus I would choose to try to save the children first, then move on to the elderly. Yes, it will be incredibly frustrating, but strife will always exist between group non-discrimination and individual fairness; I just try to resolve that strife in favor of one side.
Yeah, interesting. I definitely disagree with you on whose preferences should be met in this case, though I suspect there are some situations where I would agree with you; it would require a lot of context to understand exactly where the line of agree/disagree is.