Cooperation in a movement supporting diverse causes
The effective altruism movement has people who support many different causes. Some think we can do the most good by alleviating poverty in the developing world; some think it’s best to focus on animal suffering; others again think we should focus on helping future generations; and some argue that the best route to any of these is to invest in the development of the EA movement itself.
Given that we agree on so many principles, this much disagreement might seem surprising. And it raises a key question: what should we do about it?
Are some causes better than others?
Some differences of opinion over which cause to support may come down to a difference in values. For example, some people think we should give similar weight to the welfare of future generations as we give to our own, and others think we should not.
But more disagreements arise from differences of opinion about matters of fact. These are often very uncertain matters, such as which of two interventions will most improve the world in the long run. They can also include differences of opinion about how we should weigh proven against speculative interventions, an ultimately empirical but unresolved question.
Suppose Alice and Bob agree on their values, but Alice thinks the best way to fulfil their values is to try to avoid extinction, whereas Bob thinks it’s better to help people in poverty today. Since they agree on values, there is a correct answer about which cause provides the most value for marginal resources—and this is unlikely to change with the resources of two people. In this sense some causes are better than others.
It doesn’t necessarily follow that they should both support the same cause. Some individual choices may come down to the availability of specialist knowledge or opportunities. But it is suggestive. At least some of the difference in opinions about the best cause likely comes down to ignorance, stubbornness, or bias.
Should we try to persuade others to switch causes?
So setting aside value differences and specialisation, one cause is probably best at the margin. For those who believe they support that cause (presumably most people), this seems to present a strong argument for trying to persuade others towards supporting it too. It could provide a substantial increase in the value from their altruistic efforts.
There is value from the act of trying to persuade others as well as from success. Having a dialogue about the virtues of different options fosters truth-seeking and the idea that we may not be correct in our current views.
However, there are also reasons not to push too hard to persuade others. The simplest is epistemic modesty. If other smart, well-meaning, and well-informed people are reaching different conclusions from us, we shouldn’t be too confident that we’re the ones who are right.
Even when we do think we’re right, in some circumstances we can achieve more collectively by cooperating with people with whom we have disagreements of fact. We should preserve good relations with each other. In general, it’s worthwhile for effective altruists to be nice. Trying too hard to persuade others risks an acrimonious atmosphere which would be detrimental to the reputation of the whole movement, and its ability to collaborate and grow.
And while a single cause is probably best at the margin, there’s value for the movement as a whole in supporting a diverse portfolio of causes. Diminishing marginal returns will hardly matter for already-large causes like global health, but could have an effect for smaller ones such as movement growth. By spreading out, we can learn more and learn faster, and are less liable to fall into confirmation bias. By visibly supporting several causes, we also gain a way to enter discussions with people whose prior judgements lean towards some particular causes. It helps to demonstrate our openness to ideas and evidence, where if we all rallied around a single thing early in the growth of the movement we might more easily be pigeon-holed as the people who care about that thing.
Conclusions
A better understanding of which causes help most is really useful. So we need to continue to discuss this, sharing insights and information. But cause prioritisation isn’t a competition between the people supporting different causes where there will be winners and losers. Rather it’s a shared endeavour to uncover important truths about the world, where progress means we win collectively.
So I’ll discuss the merits of different causes with people, and I’ll try gently to persuade them of what seems best to me. But I won’t judge or think poorly of others simply because they don’t share my beliefs. I’ll take suggestions they make seriously, and be just as happy to be persuaded that I’m wrong.
Acknowledgements: thanks to Ryan Carey, Jess Whittlestone, and Rob Wiblin for comments and suggestions on drafts of this essay.
A nice middle ground between “not talking about our reasons for supporting different causes at all” and “having people try to persuade others that their cause is the most important one” could be to simply encourage more truth-seeking, collaborative discussion about causes.
So rather than having people lay out their case for different causes (which risks making people defensive, further entrenching people’s sense of affiliation to a certain cause, and deepening a divide between different “groups” in the movement), it would be nice to see more discussion where people who support different causes explicitly try to find out where their disagreements lie, and learn from each other. I’m thinking of the kind of discussion that was had between Eliezer, Luke and Holden, for example, where they discussed their views on the far future and eventually found they didn’t disagree as much as they thought they did. This kind of thing seems really valuable, both in terms of learning and bringing people closer together.
Thanks for the great post Owen, this is an important topic. I agree that one can go too far in pushing others to change cause if they sincerely think that a different one is best, for reasons including epistemic modesty and being nice. But I also agree that the potential good done is significant enough to make some efforts in this regard. I’m personally struck by how little effort I see EAs make to persuade others of particular causes or charities, given the value this would have if there were a decent chance of success (as I briefly discussed in my “Where I’m giving and why” post).
Also, as I said when representing global poverty in the causes debate at last year’s CEA away weekend, I think that dialogues about which causes to focus on would be most productive if they were focused on specific actions or, even better, charities. This makes them concrete and action-relevant for those of us deciding whether to donate to, say, deworming or an alternative charity within a different cause area.
I think this may be right. I’d like to see more careful discussion of this—perhaps with posts on this forum laying out a clear case for various different causes. One reason that it happens less than it might is that it’s not just a case of trying to persuade people that the thing you like is good—you also have to persuade them that it’s better than the thing they like. This can make it seem more like an attack, which may put people off (perhaps correctly).
Something which I think would help here would be more willingness to engage in creating and critiquing cost-effectiveness estimates. While they have limitations, they are ultimately one of the best methods we have for comparing between different kinds of outcome. I have the impression that the EA community may have turned away from them a little further than ideal. (I plan to write more on this and how I think they might best be used.)
That would be great!
Agreed that this makes it tricky, and this consequence of focusing on what’s ‘best’ reminds me of what Jess described in Supportive Scepticism. Hopefully EAs can find a way to have productive discussions about these things that aren’t phrased or taken as attacks.
I share Tom’s feeling of being struck by this. As someone who is relatively young and undecided, I would appreciate more people arguing for their own causes/paths. I agree there is a happy medium here, and I would likely put it in a place of, “People who focus on a particular cause publicly state their reasoning and welcome critique, but don’t actively try to ‘convert’ others unless invited to do so.” I would love to hear such reasoning (with critique) from many EAs.
I substantially disagree with this. I do think there are some advantages to bringing it right down to the concrete at times, but I think that discussing causes is often useful for deciding things like where to investigate further or where to wait for specific opportunities, and asking for definite actions can inadvertently cut off consideration of such options.
I’d prefer for example to think of “we already have good knowledge about great charities in global health” as a factor in favour of it as a cause. I think this has the extra benefit that cause comparison is a hard and complicated question, so it’s best to avoid complicating it further by trying to consider how good specific charities are at the same time, if this can be factored out and considered separately.
That could be so. My focus on this concrete question partly stems from being concerned with the issue I presented at the weekend away talk: I was going to give several thousand pounds to charity that year, so needed to hear a specific alternative that was better than AMF. I also find it helpful to discuss the more tractable issue of choosing between specific charities first, where one can look at things like track record. But there are certainly other ways of looking at the issue!
A corollary of my view here is that I hope in a few years’ time the distribution of causes supported looks different to now, because I think that will be indicative of an appropriate openness to change.
Thanks for a really interesting post Owen! Movements, religions and groups of people in general seem really prone to schisms, even when they have a pretty well defined view, so it seems like disagreements leading to people falling out could be one of the big risks effective altruism faces, given the diversity of views and the emphasis on finding the very best thing. I like your emphasis on epistemic modesty and creating a collaborative culture. I also like your suggestion of forum posts carefully stating the case for different causes. Hopefully if we succeed in creating a collaborative culture, that will help both to avoid those kinds of posts being felt as an attack and to prevent comments on them from getting too vehement, which will make writers happier to post them.
One thing it might be nice to see more of is people writing about and defending different causes than the ones they usually do. I think most people interested in effective altruism think that there are many causes that are really effective, and focus on a particular one either because it seems probably even more effective than the others, or just because they can’t do everything. It would be really nice for that to be more evident outside of informal discussions, for example in the posts people write. It seems difficult to do that, partly because it might seem obvious, and partly because it’s boring to talk about things that the people in the discussion already agree on. Yet that makes it easy, at least from a distance, to get the impression that people barely agree, and for people to feel confrontational towards each other.
I wonder if it might even be good to avoid too much asking people ‘what cause do you support?’. A question like that feels like it creates an expectation that people pin their colours to a particular mast, which could create a confrontational rather than collaborative atmosphere.
I tend to tell people that I don’t know enough to support a particular cause yet. That seems to work well at avoiding pinning myself down, and it’s true.
Interesting idea. How much detail would you expect such articles to go into? It seems they run the risk of a knee-jerk “this is not what EA is about” downvote.
My reading of Michelle’s point was not that we should be writing about and defending causes that we wouldn’t normally think of as EA (although this could also be beneficial!) - I think she meant, within the space of the causes EAs generally talk about, it would be good if people wrote about and defended causes different to the ones they normally do. So, for example, if a person is known for talking about and defending animal causes, they could spend some time also writing about and defending xrisk or poverty. This would then lessen the impression that many people are “fixed” to one cause, but wouldn’t have the problem you mention. I might be reading this wrong though.
I meant Jess’ reading, sorry I wasn’t clear. I was thinking people would write about / defend causes they thought were very effective, though they weren’t the ones they usually focused on (and perhaps weren’t the one they thought very most effective). I think the knee-jerk would mostly be a problem if people wrote about causes they didn’t actually think were particularly effective, which does seem like it would be problematic.
There’s an interesting question related to this, which is: suppose Alice will work well as an EA as long as she believes she is fighting for global poverty now. Alice’s emotions are tuned to her long-term goal of decreasing poverty. It is also the case that if Alice believed X-risk was more important, she would not be as motivated to work (emotionally, or due to not feeling she has a relative advantage in X-risk reduction). Should Bob try to persuade Alice of the importance of X-risk? I say no. If there is strong reason to think that Alice is more effective at reaching the instrumental goals of all EAs while believing something that she will be convinced against in the future, then, at least for the time being, we should let her be.
Giving What We Can might not have started if Toby Ord hadn’t thought poverty alleviation was the outstanding cause of our age. If he is now convinced that Superintelligence is actually more important, the benefit that he created for poverty alleviation will still thrive. Also, given EAs’ propensity to change their minds, the creation of GWWC may well increase the number of people studying and working in Superintelligence.
I believe a substantial portion of effective altruists, like myself, are drawn by the general principles and approaches of effective altruism, and the examples it provides of so much good being done, but don’t yet favor a particular cause. For my part, I don’t know which cause I currently consider the best to be working in.
I wonder how many effective altruists have a mixed opinion between two or more causes. That is, effective altruists who don’t favor one cause not because they can’t tell which, if any, seems most promising, but because the case between at least two of them is so close that it seems too difficult to decide given current evidence.