EA cause areas are just areas where great interventions should be easier to find
I worry that some EAs treat certain interventions as high expected value (high EV) because the intervention tackles a major EA cause area, rather than using major EA cause areas as a tool to identify high-EV interventions. This strikes me as “thinking in the wrong direction”, and seems wrong because we should expect there to be many, many potential interventions in global health and development, existential risk reduction, and animal welfare that have low expected value (low EV).
As a result of this error, I think some EAs overvalue some interventions in major EA cause areas, and undervalue some interventions that are not in major EA cause areas.
Because of one of the problems with the ITN framework (that we seem to switch between comparing problems and interventions as we move between importance, tractability and neglectedness), I think it may be more helpful and accurate to view the major EA cause areas as areas where high-EV interventions should be easier to find, and other cause areas as areas where high-EV interventions should be harder to find.
Thinking in these terms would mean being more open to interventions that aren’t in major EA cause areas.
The main examples where I think EAs may underestimate the EV of an intervention because it doesn’t involve a major EA cause area are those where a particular form of activism, social movement or organisation could potentially be made more efficient or effective at low cost. There are probably many such examples, some with much greater EV and some with much smaller EV, but two I’d offer are: a) starting a campaign for the USA to recognise Palestine (https://forum.effectivealtruism.org/posts/qHhLrcDyhGQoPgsDg/should-someone-start-a-grassroots-campaign-for-usa-to), and b) identifying areas and ethnic groups internationally at greatest risk of genocide or ethnic violence and trying to redirect funding for western anti-racism movements towards these areas.
From discussion in comments: One general point I’d like to make is that if a proposed intervention is “improving the efficiency of work on cause X”, a large amount of resources already being poured into cause X should actually increase the EV of the proposed intervention (though obviously this assumes that the work on cause X is positive in expectation).
Hi! I was one of the downvoters on your earlier post about Israel/Palestine, but looking at the link again now, I see that nobody ever gave a good explanation for why the post got such a negative reception. I’m sorry that we gave such a hostile reaction without explaining. I can’t speak for all EAs, but I suspect that some of the main reasons for hesitation might be:
Israel-related issues are extremely politically charged, so taking any stance whatsoever might risk damaging the carefully non-politicized reputation that other parts of the EA movement have built up. I imagine that EAs would have similar hesitation about taking a strong stance on abortion rights (even though EAs often have strong views on population ethics), or officially endorsing a candidate in a US presidential election (even though the majority of EAs are probably Democrats).
The Israel/Palestine conflict is the opposite of neglected—tons of media coverage, hundreds of activist groups, and lots of funding on both sides. A typical EA might argue that it would be better for a newly-formed activist group to focus on something like the current situation in Chad, which attracts hundreds of times less media coverage although a much larger number of people have died. (Of course, raw death toll isn’t the final arbiter of cause importance—Israel is a nuclear power, after all, so its decisions have wide ramifications.)
For whatever reason, the Israel/Palestine conflict has gained a specific reputation as a devilishly intractable diplomatic puzzle—there’s little agreement on any obvious solutions that seem like they could resolve the biggest problems.
I’m more positive about your second idea—trying to identify the areas at greatest risk of conflict throughout the whole world and take actions to calm tensions before violence erupts. To some extent, this is the traditional work of diplomacy, international NGOs, etc, but these efforts could perhaps be better-targeted, and there are probably some unique angles here that EAs could look into. While international attention from diplomats and NGOs seems to parachute into regions right at the moment of crisis, I could imagine EAs trying to intervene earlier in the lead-up to conflicts, perhaps running low-cost radio programs trying to spread American-style values of tolerance and anti-racism. I could also imagine taking an even longer-term view, and trying to investigate ways to head off the root causes of political tension and violence on a timespan of decades or centuries. (Here is a somewhat similar project examining what gave rise to positive social movements like slavery abolitionism.)
Hi, thanks for providing those reasons, I can totally see the rationale!
One general point I’d like to make is if a proposed intervention is “improving the efficiency of work on cause X”, a large amount of resources already being poured into cause X should actually increase the EV of the proposed intervention (but obviously, this is assuming that the work on cause X is positive in expectation, and as you say, some may not feel this way about some pro-Palestinian activism).
FWIW, this is pretty much the rationale behind the climate recs of FP, we recommend orgs we think can leverage the enormous societal resources poured into climate into the most productive uses within the space. In line with your reasoning we also think that events that increase overall allocation to climate might improve the cost-effectiveness of the climate recs (e.g. Biden’s victory leading to higher returns).
I would also think (though don’t know for certain) that OPP’s recent bid to hire in global aid advocacy would draw on a similar theory of change, improving resource allocation in a field that is, comparatively speaking, not neglected.
You might be interested in previous discussion of genocide prevention as a cause area here.
I’m skeptical that funding ‘anti-racism’ movements would make sense as an intervention though, at least in the contemporary ‘woke’ sense of the phrase. Many prominent ‘anti-racist’ memes, like that the relative lack of success of one ethnic group should be attributed to exploitation by another, can increase racial tensions, and are similar to those used to justify genocides in the past.
I see this sentence as suggesting capitalizing on the (relative) popularity of anti-racism movements and trying to use society’s interest in anti-racism toward genocide prevention.
Yep exactly that!
It would help if you provided examples.
Thanks for the suggestion, I’ve added an attempt at this to the post
Now that you’ve given examples, can you provide an account of how increased funding in these areas can lead to improved well-being / preserves lives or DALYs / etc in expectation? Do you expect that targeted funds could be cost-competitive with GW top charities or likewise?
So in both of the examples provided, EAs would be funding / carrying out interventions that improve the effectiveness of other work, and it is this other work that would improve well-being / preserve lives in expectation.
Because I suspect that these interventions would be relatively cheap, and because this other work would already have lots of resources behind it, I think these interventions would slightly improve the effectiveness with which a large amount of resources are spent, to the extent that the interventions could compare with GW top charities in terms of expected value.
While I’m skeptical about the idea that particular causes you’ve mentioned could truly end up being cost effective paths to reducing suffering, I’m sympathetic to the idea that improving the effectiveness of activity in putatively non-effective causes is potentially itself effective. What interventions do you have in mind to improve effectiveness within these domains?
I think the interventions would be very specific to the domain. I mentioned an intervention to direct pro-Palestinian activism towards a tangible goal; as for redirecting western anti-racism work towards international genocide prevention, one option could be getting western anti-racism organisations to partner with similar organisations in countries at greater risk of genocide, which could lead to resource and expertise sharing over a long period of time.
Props for writing the post you were thinking about!
Overwhelmingly, the things you think of as “EA cause areas” translate to “areas where people have used common EA principles to evaluate opportunities”. And the things you think of as “not in major EA cause areas” are overwhelmingly “areas where people have not tried very hard to evaluate opportunities”.
Many of the “haven’t tried hard” areas are justifiably ignored, because there are major factors implying there probably aren’t great opportunities (very few people are affected, very little harm is done, or progress has been made despite enormous investment from reasonable people, etc.)
But many other areas are ignored because there just… aren’t very many people in EA. Maybe 150 people whose job description is something like “full-time researcher”, plus another few dozen people doing research internships or summer programs? Compare this to the scale of open questions within well-established areas, and you’ll see that we are already overwhelmed. (Plus, many of these researchers aren’t very flexible; if you work for Animal Charity Evaluators, Palestine isn’t going to be within your purview.)
Fortunately, there’s a lot of funding available for people to do impact-focused research, at least in areas with some plausible connection to long-term impact (not sure what’s out there for e.g. “new approaches in global development”). It just takes time and skill to put together a good application and develop the basic case for something being promising enough to spend $10k-50k investigating.
I’ll follow in your footsteps and say that I want to write a full post about this (the argument that “EA doesn’t prioritize X highly enough”) sometime in the next few months.