Sorry to hear that you’ve had this experience.
I think you’ve raised a really important point: in practice, cause prioritisation by individual EAs is heavily irrational, shaped by social dynamics, groupthink, and deference to people who don’t actually want others deferring to them. Eliminating this irrationality entirely is impossible, but we can still try to minimise it.
I think one problem we have is that cause prioritisation by orgs like 80,000 Hours genuinely is more rational than that of many other communities aiming to make the world a better place. However, the bar here is extremely low, and I think some EAs (especially new EAs) treat 80,000 Hours’ cause prioritisation as 100% rational. I think a better framing is to see their cause prioritisation as less irrational.
As someone who is not very involved with EA socially because of where I live, I’d also like to add that from the outside there seems to be a fairly strong, widespread impression that EAs think AI Safety is the most important cause area. But when I meet “core EAs”, e.g. people working at CEA, 80k, FHI etc., there is far more divergence in views on AI x-risk than I’d expect, and this consensus does not seem to be present. I’m not sure why this discrepancy exists or how it could be fixed; maybe staff at these orgs could publish their “cause ranking” lists.
Some of my suggestions for all EA organisers and CEA to improve epistemics and cause prioritisation via intro fellowships and Arete fellowships:
Discuss these thought experiments to emphasise uncertainty in cause prioritisation, encourage more independent cause prioritisation, and discourage deference: a) “Imagine 100 different timelines in which effective altruism emerged. How consistent do you think the movement’s cause priorities (and rankings of them) would be across these 100 timelines?” and b) “Imagine effective altruism independently emerged in 100 different countries and these movements could not contact each other. How consistent do you think the movements’ cause priorities (and rankings of them) would be across these 100 countries?”
Discuss specific, unavoidable philosophical problems with cause prioritisation. These include a) the effect that defining problems more narrowly or more broadly has on how “pressing” they appear, and b) the fact that cause prioritisation is used as a shortcut for identifying impactful interventions, which is not ideal because the most impactful interventions may not sit within the top-ranked causes; there are probably other problems I can’t think of off the top of my head.
Make new EAs aware of the Big List of Cause Candidates post, and the concept of Cause X.
When encouraging EAs to get involved with the community, discuss the risk of optimising for social status instead of collective impact.
At the end of an Arete Fellowship / EA Intro Fellowship, show fellows data from the EA Surveys (particularly the cause prioritisation data) to give them a more evidence-based sense of what the community actually thinks.
This post is now three years old but is roughly what you suggest. For convenience I will copy one of the more relevant graphs into this comment:
> What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil’s funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.
Hey—thanks for the suggestions!
I work on the Virtual Programs team at CEA, and we’re actually thinking of making some updates to the handbook in the coming months. I’ve noted down your recommendations and we’ll definitely consider adding some of the resources you shared. In particular, I’d be excited to add the empirical data point about cause prio, and maybe something discussing deference and groupthink dynamics.
I do want to mention that some of these resources, or similar ones, already exist within the EA Handbook intro curriculum. To note a few:
- Moral Progress & Cause X, Week 3
- Crucial Conversations, Week 4 (I think this gets at some similar ideas, although not exactly the same content as anything you listed)
- Big List of Cause Candidates, Week 7
I also want to mention that while we are taking another look at the curriculum (and we will apply this lens when we do), my guess is that a lot of the issue here, as you point out, actually happens through interpersonal dynamics rather than being driven by the curriculum itself, and hence requires different solutions.