Three Heuristics for Finding Cause X
This post is cross-posted from effectivealtruism.org.
In the October 2016 EA Newsletter, we discussed Will MacAskill’s idea that discovering some important, but unaddressed moral catastrophe—which he calls “Cause X”—could be one of the most important goals of the EA community. By its very nature, Cause X is likely to be an idea that today seems implausible or silly, but will seem obvious in the future just as ideas like animal welfare or existential risk were laughable in the past. This characteristic of Cause X—that it may seem implausible at first—makes searching for Cause X a difficult challenge.
Fortunately, I think we can look at the history of past Cause X-style ideas to uncover some heuristics we can use to search for and evaluate potential Cause X candidates. I suggest three such heuristics below.
Heuristic 1: Expanding the moral circle
One notable trend in moral thinking since the Enlightenment is the expansion of the set of entities deemed worthy of moral consideration. In Classical Athens, women had no legal personhood and were under the guardianship of their father or another male relative. In ancient Rome, full citizenship was granted only to male Roman citizens, with limited forms of citizenship granted to women, client-state citizens, and freed slaves. Little or no moral consideration was paid to slaves and people outside of Roman rule.
Most societies consider “insider” groups like the family or the tribal group to have moral worth and assign lesser or no moral significance outside that group. Over time, there seems to have been a steady expansion of this moral circle.
In Classical Athens, it seemed obvious and normal that only men should have political rights, that fathers should be allowed to discard unwanted infants, and that slaves should serve their masters. Concepts like “war crimes”—that foreigners had any right to humane treatment—or “women’s rights” would have sounded shocking and unnatural.
As the centuries have progressed, there has been a growing sense that categories like gender, nationality, sexuality, and ethnicity don’t determine the worth of a person. Movements for the liberation or fairer treatment of slaves, women, children, racial minorities, sexual minorities, and non-human animals went from seeming absurd and intractable to being broadly accepted. Forms of subjugation that once seemed natural and customary grew controversial and then unacceptable.
In fact, we can think of many of the popular cause areas in effective altruism as targeting beneficiaries that have not yet gained full moral consideration in society. Global poverty expands the moral circle to include people you may never meet in other countries. Animal welfare expands the circle to include non-human animals. And far future causes expand the circle to include sentient beings that may exist in the future. Cause X may follow this trendline by taking the expanding moral circle in unexpected or counterintuitive directions.
Therefore, a natural heuristic for discovering Cause X is to push the trendline further. What sentient beings are being ignored? Could sentience arise in unexpected places? Should we use sentience to determine the scope of moral consideration, or should we use some other criterion? Answering these questions may yield candidates for unexplored ways of improving the world.
Read more on the expanding moral circle
Singer’s book: The Expanding Circle
Available in PDF for free here
Examples
I’ve divided examples of this heuristic in action into two categories. The first is the intellectual challenge of figuring out which things we ought to care about. The second is the personal challenge of widening your scope of concern for others.
Expanding the intellectual moral circle
One way to find a new Cause X is to expand the set of things that we recognize as mattering morally. Below is a list of examples of this heuristic in action. Inclusion here does not necessarily imply an endorsement of the idea.
Wild-Animal Suffering
This paper takes the common idea that we ought to have some concern for nonhuman animals directly affected by humans (e.g. factory farmed animals or pets) and extends it to include suffering in wild animals. Due to the enormous number of wild animals and the suffering they undergo, the paper argues that “[o]ur [top] priority should be to ensure that future human intelligence is used to prevent wild-animal suffering, rather than to multiply it.”
The Importance of Wild-Animal Suffering
Reading time: ~30m
Insect Suffering
If you agree that wild animals fall inside our moral circle, then one area of particular concern is the suffering of insects. Given that there are an estimated 1 billion insects for every human alive, consideration of insect suffering may matter a great deal if insects matter morally.
The Importance of Insect Suffering
Reading time: ~12m
The ethical importance of digital minds
Whatever it is that justifies including humans and animals in our scope of moral concern may exist in digital minds as well. As the sophistication and number of digital minds increase, our concern for how digital minds are treated may need to increase proportionally.
Do Artificial Reinforcement-Learning Agents Matter Morally?
Reading time: ~1h
Which Computations Do I Care About?
Reading time: ~1h
Expanding the personal moral circle
In addition to expanding the intellectual moral circle, we can expand our personal moral circle by increasing the amount of empathy, compassion, and other pro-social behaviors that we exhibit towards others. An example of this is below, although more research into this area is needed.
Using meditation to increase pro-social behaviors
Meditation, especially mindfulness, loving-kindness, and compassion meditation, appears to have positive effects on a large number of pro-social behaviors. Emerging research suggests benefits for behavior in prosocial games, self-compassion and other-focused concern, positive emotions, life satisfaction, and depressive symptoms, as well as decreases in implicit intergroup bias.
Loving Kindness and Compassion Meditation: Potential for Psychological Interventions
Reading time: ~22m
Heuristic 2: Consequences of technological progress
A second heuristic for finding a Cause X is to look for forthcoming technological advances that might have large implications for the future of sentient life. Work that focuses on the safety of advanced artificial intelligence is one example of how this approach has been successfully applied to find a Cause X.
This approach does not necessarily need to focus on the downsides of advanced technology. Instead, it could focus on hastening the development of particularly important or beneficial technologies (e.g. cellular agriculture) or on helping to shape the development trajectory of a beneficial technology to maximize the positive results and minimize the negative results (e.g. atomically precise manufacturing).
More on the consequences of technological progress
Nick Bostrom’s Letter from Utopia
Available here
Reading time: ~10m
Open Philanthropy Project’s Global Catastrophic Risks focus area
Available here
Basic intro reading time: ~2m
Examples
I’ve included some examples of this heuristic in action below. Inclusion here does not necessarily imply an endorsement of the idea.
Embryo selection
Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?
Reading time: ~16m
Atomically Precise Manufacturing (APM)
Risks from Atomically Precise Manufacturing
Reading time: ~15m
Molecular engineering: An approach to the development of general capabilities for molecular manipulation
Reading time: ~14m
Animal Product Alternatives
Scenarios for Cellular Agriculture
Reading time: ~10m
Animal Product Alternatives
Reading time: ~25m
Eliminating the biological substrates of suffering
The Hedonistic Imperative
Reading time: ~3h
Heuristic 3: Crucial Considerations
A final heuristic is to look for ideas or arguments that might necessitate a significant change in our priorities. Nick Bostrom calls these ideas “crucial considerations” and explains the concept as follows:
“A thread that runs through my work is a concern with ‘crucial considerations.’ A crucial consideration is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.”
A good example of the use of this heuristic is the argument for focusing altruistic efforts on improving the far future. The key insights in the argument are that 1) the total value in the future may vastly exceed the value today and 2) we may be able to affect the far future. The core argument is relatively simple but has enormous implications for what we ought to do. Finding similar arguments is a promising method to discover Cause X.
Examples
I’ve included some examples of this heuristic in action below. Inclusion here does not necessarily imply an endorsement of the idea.
The simulation argument
Are you living in a computer simulation?
Reading time: ~20m
Infinite ethics
Infinite ethics
Reading time: ~1h15m
Civilizational stability
The Long-Term Significance of Reducing Global Catastrophic Risks
Reading time: ~18m
Conclusion
Finding Cause X is one of the most valuable, but challenging, things the EA community could accomplish. The challenge comes from two sources. The first is the enormous technical challenge of finding an extremely important, plausible, and unexplored cause. The second is the emotional challenge of engaging with plausible Cause X candidates. This requires careful calibration between being appropriately skeptical about counterintuitive ideas on the one hand and being sufficiently open-minded to spot truly important ideas on the other.
Given this challenge, one way the EA community can make progress on Cause X is to create the kind of intellectual community that can engage sensibly with these ideas. Our results so far are promising, but there is much more to be done.
Neat. A couple notes:
The methodology here is empirical: you’ve identified the methods that have worked well to identify the causes X that we have already figured out. But if there were a different heuristic which is actually better at revealing causes—especially the causes which are hidden to the heuristics stated here and therefore hidden to us—then we wouldn’t know about it if we never tried it. And maybe the fact that we’ve tried these heuristics already implies that they’ve done what they can and we should try other heuristics instead. (Maybe it would help if we compiled a list of heuristics which have been tried and didn’t work.)
The third heuristic is less well defined than the other two and I don’t see a good way of formalizing it or systematically searching with it. What does it mean to find a crucial consideration if not just finding a big new cause? But I think a good heuristic which captures the idea behind Bostrom’s argument is to expand our comparisons across wider sets of possible worlds (including possible worlds where different theories of philosophy, etc are true) and take note of conditionally dependent considerations.
This doesn’t actually provide anything like a framework to evaluate Cause X candidates. Indeed, I would argue it doesn’t even provide a decent guide to finding plausible Cause X candidates.
Only the first methodology (expanding the moral sphere) identifies a type of moral claim that we have historically looked back on and found to be compelling. The second and third methods just list typical ways people in the EA community claim to have found Cause X. Moreover, there is good reason for thinking that successfully finding something that qualifies as Cause X will require coming up with something that isn’t an obvious candidate.
A few thoughts:
Methods work could be framed as trying to build new streetlights to search under. I would guess (to echo kbog) that building new streetlights will ultimately prove more important/be persistently neglected than expanding existing streetlights.
Expanding circle is predicated on our understanding of suffering. I expect direct inquiry into this by cognitive science to wind up having important ramifications for trying to reduce suffering.
Exploring ways of sharpening problems/problem formulations is an area I expect has not been exhausted of low-hanging fruit. Much of EA itself, after all, could be seen as framing the charity problem similarly to common startup problem frames. This led to applying a new set of tools to an existing problem, which produced significant gains. This could also be framed as a choice of reference class. Some examples: what does EA look like if you model the problems as bottleneck/constraint problems? What does it look like when you model it as memetic spread (and not simply “EA should spread to as many people as possible,” but what are the 2nd-order effects likely to look like given the memetic environment)? What does EA look like if you model it as a process that generates feedback loops? EA as a motivation problem? A game theory problem? You get the idea. Each frame is going to highlight different hypotheses that might be worth testing.
I think methods work is currently neglected given that we need to figure out the way the world is. The degree to which our search methods mimic existing search methods is the degree to which I expect our impact to be roughly the same as the impact generated by the existing systems.