What Cause(s) Do You Support? And Why?
Not The Whole Picture
The results for the 2014 effective altruism survey were recently released. While there's criticism that the survey wasn't as representative or accurate as we would have hoped, the numbers given by survey respondents about which causes they favor as the most effective to work within could be used as a proxy for what effective altruism supports on the whole. The numbers as taken from the survey results are given below.
Poverty: 579
Metacharity: 422
Rationality: 411
Cause Prioritization: 345
AI Risks: 332
Environmentalism: 317
Existential Risk: 301
Animals: 296
Politics: 291
Far future: 233
Aside from how the sample may not be representative of effective altruism as a whole community, I'm also not confident these results represent the nuance and specificity of why and how each of us supports different causes. I think a big factor is that many of us don't favor only one cause, and which cause we favor "the most" doesn't capture exactly how we would like our thoughts and actions to be represented. For example, the analysis of the survey noted:
Despite “animal welfare” being a less popular cause than other causes, there appears to be a widespread concern for reducing meat among EAs, with 69.1% of EAs at least reducing the amount of meat they eat, a 33.1% vegan/vegetarian rate, and a 15.6% vegan rate. While it’s hard to get reliable national statistics, it seems indisputable that the vegetarianism / veganism rate is much higher than the US national average of ~3% vegetarian and ~0.5% vegan.
This indicates a concern for non-human animal welfare or rights within effective altruism beyond the 8.4% of survey respondents who responded that animal advocacy is the cause where they believe the most good can be done. Many people may care about non-human animals, and thus are vegan or vegetarian, but believe the forms of animal advocacy effective altruism has examined don't make it a tractable cause at the present time. They may also, or instead, believe that while, e.g., reducing or eliminating factory farming is important, working primarily in other cause areas is more important and/or tractable. Regarding the roughly one third of survey respondents who reduce, but don't eliminate, their meat consumption, this could be due to how some of us are uncertain about what moral weight to grant to non-human animals. As an anecdote, I know multiple friends and peers who entered effective altruism originally favoring other causes, but became vegetarians or reduced their level of meat consumption due to their exposure to arguments and reasoning from other effective altruists advocating for animals.
Another confounding factor is that many causes provided for the question on the survey overlap. "Politics" could overlap with any cause, including political advocacy for any other mentioned cause. "Cause Prioritization" and the "far future" are also vague, arguably spurious, causes, allowing overlap with virtually any other cause. Any effective altruist could favor a specific cause (area), while also thinking "cause prioritization" and the "far future" are possibly the most important things ever, because who doesn't want to know what the best actions to take are, or how to preserve what we value for the indefinite future? Concern for A.I. risks alone virtually overlaps with concern for existential risks, but doesn't capture why and how we're concerned about other existential risks. The idea of "meta-charity" is one perhaps unique to effective altruism, geared towards funding and supporting organizations within effective altruism which support and advocate for other causes among the public more broadly, or research what new causes we should favor. This latter part of "meta-charity" seems indistinguishable from "cause prioritization".
It’s Okay to Have Complicated Opinions
These confusions and uncertainties aside, it's difficult enough for each of us as individuals trying to make hard ethical decisions based on mixed interpretations of limited information. I myself and dozens of other effective altruists I know are not so confident in supporting any single cause (area) that we don't believe the cause we favor most could switch in a matter of months. Also, the partnership between charity evaluator Givewell and foundation Good Ventures could produce research causing us to favor new causes that currently aren't greatly represented within effective altruism, such as advocacy for various specific policies, scientific research, or biosecurity. What drew many of us to effective altruism in the first place was the idea of figuring out how to do the most good, without already having a specific cause in mind as the one promising the most opportunity for doing good. Our minds can or will change. Effective altruism is a young movement, and the landscape of causes may not be so stable.
I believe all these considerations are frequently overlooked in conversations in effective altruism. I've followed many conversations within effective altruism discussing the various pros and cons of specific cause areas. These conversations don't often leave room for us to express uncertainty, to explain how we have hope and skepticism, both unresolved, for multiple cause areas. I think those of us holding such opinions might represent a plurality of effective altruists. This may not be captured in either any statistics, or common impressions, of effective altruism. On the other hand, those of us with great confidence that a single cause carries more opportunity to do the most good than any other cause may not often have a way to put forward our best arguments for it.
So, I welcome you to make the case in the comments for which cause(s) you believe do or don't provide the best opportunity for doing the most good. Remember when doing so to be supportive of others in discussion and debate.
[Important meta point]
Thanks for asking this question, and I look forward to answering it and diving in. I do agree that it’s important to get that nuance.
However, there is one problem I must first correct:
You can’t add the percentages like you did, because people were allowed to indicate support for more than one cause. So the 579 people who supported poverty and the 422 people who supported metacharity are not 1001 distinct people. (In fact, they were 683 distinct people.)
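The double counting is easy to see with a toy example (the respondent IDs below are hypothetical, not the actual survey data): two causes with 5 and 4 supporters can naively "sum" to 9 while only 7 distinct people are involved.

```python
# Toy illustration of why counts from a multiple-select question
# can't simply be added: the same respondent can appear in
# several cause groups. IDs here are made up for illustration.
poverty_supporters = {1, 2, 3, 4, 5}
metacharity_supporters = {4, 5, 6, 7}

naive_total = len(poverty_supporters) + len(metacharity_supporters)
distinct_total = len(poverty_supporters | metacharity_supporters)  # set union

print(naive_total)     # 9 (double-counts respondents 4 and 5)
print(distinct_total)  # 7 distinct people
```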
I missed that. Thanks. I’ll fix it.
If there were 813 EAs in total, that also puts those who prioritize animal rights at 36.4%. Here’s the breakdown:
Poverty: 71.22%
Metacharity: 51.91%
Rationality: 50.55%
Prioritization: 42.44%
AI Risk: 40.84%
Environmentalism: 38.99%
X-Risk: 37.02%
Animals: 36.41%
Politics: 35.79%
Far Future: 28.66%
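For reference, these figures follow from dividing each raw count by the 813 total respondents (assuming that total is right). A minimal sketch of the calculation:

```python
# Raw counts from the survey, divided by the total respondent
# count. Percentages sum to well over 100% because respondents
# could select multiple causes.
counts = {
    "Poverty": 579, "Metacharity": 422, "Rationality": 411,
    "Prioritization": 345, "AI Risk": 332, "Environmentalism": 317,
    "X-Risk": 301, "Animals": 296, "Politics": 291, "Far Future": 233,
}
TOTAL = 813  # total survey respondents

for cause, n in counts.items():
    print(f"{cause}: {round(100 * n / TOTAL, 2)}%")
# First line printed: Poverty: 71.22%
```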
I don't remember the exact phrasing of the question, but because of the flexibility you note, I would find it easier to answer something like "In what cause areas have you supported or are you planning to support interventions this year?" than "In what cause areas do you support interventions?"
The overlap between the answers is not a great problem for me as respondent, since I can select several, but some of the options will become less meaningful if some people perceive the hypernym relationships differently. Maybe these answers can be instrumented via monetary donations or direct work for charities in the areas. (I hope I’m using that word right. ^^) That strategy will probably become more promising once Open Phil has put out concrete recommendations.
The precise wording was “Which of the following causes do you think you should devote resources to?”