As a community, EA sometimes talks about finding “Cause X” (example 1, example 2).
The search for “Cause X” featured prominently in the billing for last year’s EA Global (a).
I understand “Cause X” to mean “new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar.”
This afternoon, I realized I don’t really know how many people in EA are actively pursuing the “search for cause X.” (I thought of a couple people, who I’ll note in comments to this thread. But my map feels very incomplete.)
In my understanding, “Cause X” is something we almost take for granted today, but that people in the future will see as a moral catastrophe (similar to how we see slavery today, versus how people in the past saw it). So it has a bit more nuance than just being a “new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar”.
I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of “recognition” that a neglected moral problem starts from in a world as interconnected as ours. Here is what comes to mind:
Wild animal suffering (probably not recognised as a moral problem by the majority of the population)
Aging (many people probably ascribe it a neutral moral value, maybe because it is rightly regarded as a “natural part of life”. That observation is fair, but it doesn’t settle its moral value or how many resources we should devote to the problem)
“Resurrection” or, in practice right now, cryonics. (Probably neutral value/not even remotely on the radar of the general population, with many people possibly even ascribing it a negative moral value)
Something related to subjective experience? (aspects of subjective experience that people don’t deem worthy of moral consideration because “times are still too rough to notice them”, or aspects of subjective experience that we are missing out on but could achieve today with the right interventions).
Cause areas that I think don’t fit the definition above:
Mental health, since it is recognised as a moral problem by a large enough fraction of the population (though perhaps still not large enough?). It is nonetheless still too neglected.
X-risk. Recognised as a moral problem (who wants the apocalypse?), but still too neglected, probably for reasons unrelated to ethics.
But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I’m not sure if there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.
What organizations do you have in mind?
Open Philanthropy, GiveWell, and Rethink Priorities probably qualify. To clarify: my phrase didn’t mean “devoted exclusively to finding new potential cause areas”.
I think alternate foods for catastrophes like nuclear winter is a cause X (disclaimer, co-founder of ALLFED).
Thanks!
Very curious why this was downvoted. (This idea has been floated before, e.g. on the 80,000 Hours podcast, and seems like a plausible Cause X.)
I think working on preventing the collapse of civilization given a loss of electricity/industry due to extreme solar storms, high-altitude electromagnetic pulses, or narrow-AI computer viruses is a cause X (disclaimer, co-founder of ALLFED).
This is not a solution/answer, but someone should design a clever way for us to be constantly searching for Cause X. I think a general contest could help, such as an “Effective Thesis Prize” to reward good work aligned with EA goals; perhaps Cause X could be the aim of a contest of its own.
Rethink Priorities seems to be the obvious organization focused on this.
From their website:
Sounds like they’re currently focused on new animal welfare & community-building interventions, rather than finding an entirely different cause area.
We’re also working on understanding invertebrate sentience and wild animal welfare—maybe not “cause X” because other EAs are aware of this cause already, but I think it will help unlock important new interventions.
Additionally, we’re doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not “cause X” because EAs are already aware of it.
Lastly, we’re also working on examining ballot initiatives and other political methods of achieving EA aims—maybe not cause X because it isn’t a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.
Thanks!
Is there a public-facing prioritized list of Rethink Priorities projects? (Just curious)
Right now everything I mentioned is in https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019
We’re working on writing up an update.
Between this, some ideas about AI x-risk and progress, and the unique position of the EA community, I’m beginning to think that “move Silicon Valley to cooperate with the US government and defense on AI technology” is Cause X. I intend to post something substantial in the future.
Me.
Can you expand on this answer? E.g. how much this is a focus for you, how long you’ve been doing this, how long you expect to continue doing this, etc.
I’d refer you to the comments of https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#Jp9J9fKkJKsWkjmcj
The link didn’t work properly for me. Did you mean the following comment?
Yep :)
GiveWell is searching for cost-competitive causes in many different areas (see the “investigating opportunities” table).
Good point. Plausibly this is Cause X research (especially if they team up with Mark Lutter & co.); I’ll be curious to see how far outside their traditional remit they go.
Arguably it was the philosophers who found the last few. Once the missing moral reasoning was shored up, the cause-area conclusion was pretty deductive.