Cause X Guide
One of the factors that makes the effective altruism movement different from so many others is that its members are unified by the broad question “How can I do the most good?” instead of by specific solutions, such as “reduce climate change.” One of the most important questions EAs need to consider is which cause area presents the highest impact for their work.
There are four established cause areas in effective altruism: global poverty, factory-farmed animals, artificial intelligence existential risk, and EA meta. However, there are dozens of other cause areas that some EAs consider promising. The concept behind a “cause X” is that there could be a cause currently neglected by the EA community that is as important as, or more important than, the four established EA cause areas. Finding a new cause X should be one of the biggest goals of the EA movement and one of the largest opportunities for an individual EA to achieve counterfactual impact.
Cause X posts have already had an impact: some of them influenced Charity Entrepreneurship’s focus on mental health, and the cause X discussion has also influenced one of the largest foundations in the world, Good Ventures.
This guide aims to compile the most useful content for evaluating possible new cause Xs and comparing them to the currently established top cause areas. Some of the content is old, and some of it does not perfectly address its question; however, these were the best sources I could find to debate and explain the issues. This guide is aimed at an intermediate EA audience that already has a solid understanding of EA ideas.
Organization
The guide is broken down into three sections. The introduction explains the concepts needed to compare cause areas, such as “Cause X,” “How a new cause area might be introduced to the EA community,” “Current methods used to split resources between causes,” and “Concerns with some of those methodologies.” The second section focuses on comparing top causes and reviews some of the key issues that divide current supporters of the big four cause areas. The final section presents several possible candidates for cause X as new areas worth considering. These are only a small sample of the causes presented and considered in the EA movement, but they were selected to represent the areas (other than the big four) that many EAs would consider promising. I used three different methods to devise a list of 15 cause areas that might be considered promising candidates for cause X, selecting five causes per method (see the sketch after the list below).
Method 1: Cause areas among the top ten listed on the EA survey
Method 2: Cause areas endorsed by two or more major EA organizations
Method 3: Cause profiles or pitches with 50 or more upvotes on the EA Forum
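To make the selection procedure concrete, here is a minimal sketch of the three filters in Python. The data structure, example numbers, and field names are all hypothetical illustrations of the process described above, not the actual survey, endorsement, or upvote data.

```python
# Hypothetical sketch of the three selection methods described above.
# All figures below are made up for illustration; they are not the real
# survey rankings, endorsement counts, or Forum upvote totals.

causes = [
    {"name": "Mental health",  "survey_rank": 8,    "endorsements": 2, "upvotes": 120},
    {"name": "Climate change", "survey_rank": 4,    "endorsements": 3, "upvotes": 90},
    {"name": "Coal fires",     "survey_rank": None, "endorsements": 0, "upvotes": 55},
]

# Method 1: ranked among the top ten cause areas on the EA survey.
method_1 = [c for c in causes if c["survey_rank"] is not None and c["survey_rank"] <= 10]

# Method 2: endorsed by two or more major EA organizations.
method_2 = [c for c in causes if c["endorsements"] >= 2]

# Method 3: a cause profile or pitch with 50 or more upvotes on the EA Forum.
method_3 = [c for c in causes if c["upvotes"] >= 50]

# Take up to five causes per method, skipping duplicates, for 15 in total.
shortlist = []
for method in (method_1, method_2, method_3):
    for cause in method[:5]:
        if cause["name"] not in shortlist:
            shortlist.append(cause["name"])

print(shortlist)  # ['Mental health', 'Climate change', 'Coal fires']
```

In practice the three methods surface overlapping causes, which is why the sketch deduplicates by name before counting toward the 15.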
Goal
This guide aims to be a resource wherein cause Xs can be noticed, read about, and more deeply considered. There are hundreds of ways to make the world a better place. Given the EA movement’s relative youth and frequently unsystematic way of reviewing cause areas, there is ample room for more consideration and research. The goal of the guide is for more people to consider a wider range of cause areas so we, as a movement, have a better chance of finding new and impactful ways to do good.
Cause X guide content
Introduction
-Four focus areas of EA
-EA cause selection
-Worldview diversification
-Cause X
-What if you’re working on the wrong cause?
-EA representativeness
-How to get a cause into EA
Comparing top causes
-Animals > Humans
-Humans > Animals
-Long-term future > Near-term future
-Near-term future > Long-term future
-Meta > Direct
-Direct > Meta
New causes one could consider
-Mental health
-Climate change
-Nuclear war
-Rationality
-Biosecurity
-Wild animal suffering
-Meta science research
-Improving institutional decision making
-Immigration reform
-Government policy
-Invertebrates
-Moral circle expansion
-Happiness
-Pain in the developing world
-Coal fires
If this guide is helpful to a lot of people, I will update or deepen the key posts or connect them better to make a more comprehensive PDF handbook. We will also keep a copy of this guide on Charity Entrepreneurship’s website here so it is easier for people to find in the future.
It seems to me that this post has introduced a new definition of cause X that is weaker (i.e. easier to satisfy) than the one used by CEA.
This post defines cause X as:
>a cause currently neglected by the EA community that is as important as, or more important than, the four established EA cause areas
But from Will MacAskill’s talk:
See also the first paragraph of Emanuele Ascani’s answer here.
From the “New causes one could consider” list in this post, I think only Invertebrates and Moral circle expansion would qualify as a potential cause X under CEA’s definition (the others already have researchers/organizations working on them full-time, or wouldn’t sound crazy to the average person).
I think it would be good to have a separate term specifically for the cause areas that seem especially crazy or unconceptualized, since searching for causes in this stricter class likely requires different strategies, more open-mindedness, etc.
Related: Guarded definition.
Improving how we measure well-being & happiness is related to Mental health and Meta science research.
See also Logarithmic Scales of Pleasure and Pain.
To zoom in on the “logarithmic scales of pleasure and pain” angle (I’m the author): this way of seeing the world suggests that the bulk of suffering is concentrated in a small percentage of experiences. Thus, finding scalable treatments specifically for ultra-painful conditions could address a much larger share of the world’s burden of suffering than most people would intuitively realize. I really think this should be high on the list of considerations for Cause X.
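To make the concentration claim concrete, here is a minimal numeric sketch. It assumes pain intensities follow a lognormal distribution; the parameters are my own illustrative assumption rather than figures from the linked post, whose claim is simply that the distribution of intensities is heavy-tailed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: pain intensities are lognormally distributed.
# sigma = 3.0 makes intensities span several orders of magnitude, in the
# spirit of the "logarithmic scales" claim; the exact value is made up.
intensities = rng.lognormal(mean=0.0, sigma=3.0, size=1_000_000)

# What share of total suffering comes from the worst 5% of experiences?
threshold = np.quantile(intensities, 0.95)
top_share = intensities[intensities >= threshold].sum() / intensities.sum()

print(f"Worst 5% of experiences account for {top_share:.0%} of total suffering")
# Roughly 90% under these parameters, which is why scalable treatments for
# ultra-painful conditions could address a disproportionate share of the
# world's burden of suffering.
```

The qualitative takeaway does not hinge on these exact parameters: for any sufficiently heavy-tailed distribution, a small fraction of experiences dominates the total.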
(see also the writeup of an event we hosted about possible new EA Cause Xs)
I like this post a lot; it is succinct and provides a great, actionable resource for EAs.
Stylistically, I would prefer the Organization section to be broken into one paragraph per subsection to make it easier to read.
I like that you precommitted to a transparent way of selecting the new causes you present to readers and limited the scope to 15. I would personally have liked to see them grouped into sections by the method through which they were chosen.
For other readers who are eager for more, here are two others that satisfy the criteria but, I suppose, did not make the list:
Atomically Precise Manufacturing (a cause area endorsed by two major organizations: OPP and Eric Drexler from FHI)
Aligning Recommender Systems (a cause profile with more than 50 upvotes on the EA Forum)
Easing the legal and logistical obstacles to euthanasia for those with painful terminal illnesses.
Treating cluster headaches is another new cause candidate. See also this proposed treatment.
Wouldn’t that be part of improving global health and/or wellbeing? To me this would be one meta level below the general cause areas.
New cause candidates Mental health, Moral circle expansion, and Happiness all overlap with Psychedelics.
Though you mentioned climate and nuclear, I think resilience to agricultural catastrophes, and to catastrophes that could disrupt electricity/industry, are separate possible cause X areas. This work is endorsed by ALLFED, BERI (through a grant), and CEA (through a grant). (Disclosure: I’m the director of ALLFED.)
Recently, I’ve been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.
At the moment, we’re expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but that working on it can still be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is still valuable; succeeding at preemptive regulation of a technological risk would improve our ability to do the same for more difficult cases (e.g., AI); and this risk offers a concrete, non-abstract way to popularize the X-risk concept, compared with the more abstract risks from technologies like AI and biotech (most people understand the prevailing theory of the extinction of the dinosaurs and can fairly easily imagine such a disaster in the future).
That’s a very interesting topic that I hadn’t considered before, and your argument for why it’s worth having at least some people thinking about and working on it seems sound to me.
But I also wondered when reading your comment whether publicly discussing such an idea is net negative due to posing information hazards. (That would probably just mean research on the idea should only be discussed individually with people who’ve been at least briefly vetted for sensibleness, not that research shouldn’t be conducted at all.) I had never heard of this potential issue, and don’t think I ever would’ve thought of it by myself, and my knee-jerk guess would be that the same would be true of most policymakers, members of the public, scientists, etc.
Have you thought about the possible harms of publicising this idea, and run the idea of publicising it by sensible people to check that there’s no unilateralist’s curse occurring?
(Edit: Some parts of your poster have updated me towards thinking it’s more likely than I previously thought that relevant decision-makers are or will become aware of this idea anyway. But I still think it may be worth at least considering potential information hazards here—which you may already have done.
A related point is that I recall someone—I think they were from FHI, but I can’t easily find the source—arguing that publicly emphasising the possibility of an AI arms race could make matters worse by making arms race dynamics more likely.)
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person taking a look at the arguments and data.
I agree completely regarding information hazards. We’ve been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we’re talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we’re in new territory. We’ve definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven’t seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are much more mature and there are many status quo space forces and space industries that will fight regulation of their capabilities).
If you’re interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the “arms race” terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren’t the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
It sounds like you’ve given the possibility of information hazards careful attention, recognised the value of consulting others, and made reasonable decisions. (I expected you probably would’ve done so—just thought it’d be worth asking.)
I also definitely agree that the possibility of information hazards shouldn’t just serve as a blanket, argument-ending reason to not fairly publicly discuss any potentially dangerous technologies, and that it always has to be weighed against the potential benefits of such discussion.
See also: Do we know how many big asteroids could impact Earth?
Although it seems to be fine for the majority, school drives some children to suicide. Given that there is little evidence of benefit from schooling, advocating for letting those most affected have alternative options could be high impact.
There is strong evidence that the majority of children will never learn to read unless they are taught. Most children who go to school learn to read. That in itself is strong evidence that there are benefits to schooling.
In what countries are there no alternatives to attending school?
Regarding (2): in Germany, for example, homeschooling is illegal and attending school is a legal requirement for every child.
It’s also illegal in Turkey and (de jure at least) in China.
Thanks Denise, that’s helpful.
>There is strong evidence that the majority of children will never learn to read unless they are taught.
This is a different claim. I don’t know of strong evidence that children will fail to learn to read if not sent to school.
I claim that if state-funded universal primary education did not exist, a significant minority of the population would never learn to read. A current benefit of schools is providing near-universal literacy. I am frankly amazed that you claim that there is little evidence of benefit from schooling.
It seems like you’re arguing from common sense?
http://happinessishereblog.com/2017/10/reading-doesnt-need-taught-unschoolers-learn-read/
https://www.psychologytoday.com/us/blog/freedom-learn/201406/survey-grown-unschoolers-i-overview-findings
Blog posts won’t convince me; I studied linguistics and education for my undergrad, which convinced me that most children don’t teach themselves to read. A few do, and some have parents who teach them. But if you want to convince me that all children (not just a handful!) can and will teach themselves to read without school, you will need to show me some academic evidence.
I am convinced of this not only because I was explicitly taught it by experts in linguistics and education, but also because we did not have universal literacy before we had universal primary education (and countries without universal primary education still don’t!), and because we have evidence that some teaching methods help children learn to read more quickly and fluently than others (if teaching did literally nothing beneficial, as you still seem to be suggesting, we shouldn’t see significant differences between teaching methods).
Also consider, in this hypothetical world without schools, how children will access books.
Note: Assuming you’re not a senior policymaker or politician, I don’t think it’s a good use of my time to continue. I will however click on any relevant peer-reviewed studies and at least read the abstract, even if I don’t comment.
I apologise for my tone in this thread. I don’t think it was very helpful.
I feel like, before pulling out “blog posts won’t convince me” you could have first provided any links to support your view.
We seem to be having different conversations. I think you’re looking for strong evidence of stronger, more universal claims than I am making. I’m trying to say that this hypothesis (for some children) should be within the window of possibility and worthy of more investigation. There’s a potential motte and bailey problem with that, and the claims about evidence for benefit from schooling broadly should probably be separated from evidence for harms of schooling in specific cases.
>Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. -Meditations on Moloch
Imagine that an altruistic community in such a world is very open-minded and willing to consider not shocking oneself all the time, but wants to see lots of evidence for this produced by the taser manufacturers, since, after all, they know the most about tasers and whether they are harmful...
If you give children the option of being tased or going to school, some of them are going to pick the taser.
Does this mean you no longer endorse the original statement you made (“there is little evidence of benefit from schooling”)?
I’m feeling confused… I basically agreed with Khorton’s skepticism about that original claim, and now it sounds like you agree with Khorton too. It seems like you, in fact, believe something quite different from the original claim; your actual belief is something more like: “for some children, the benefits of schooling will not outweigh the torturous experience of attending school.” But it doesn’t seem like there has been any admission that the original claim was too strong (or, at the very least, that it was worded in a confusing way). So I’m wondering if I’m misinterpreting.
I think there are two claims. I stand by both, but think arguing them simultaneously causes things like a motte and bailey problem to rear its head.
This is key: in already-industrialized countries, kids may learn on their own or via homeschooling, but for society as a whole, public education is necessary; otherwise, kids don’t learn.
Preventing childhood trauma is another new cause candidate.
Along these lines, preventing childhood lead poisoning is another potential candidate.
Subcategory: figuring out why child porn is increasing superlinearly
pedantic note: I believe GiveWell’s new focus on government policy falls within the existing categories of global health and institutional reform, rather than being its own cause area.
US campaign finance reform is another new cause candidate, related to (or a subset of) Improving institutional decision making.
EA’s emphasis is on “Global Health and Poverty.” The missing cause X here is basic education; I suggest the cause area should be “Global Basic Education and Health.”
By basic education, I mean 12 years of schooling (the equivalent of high school in the USA).