A new paper in Judgment and Decision Making finds that:
- People often choose to help people from more disadvantaged groups “even when this transparently implies sacrificing lives.”
- However, people are much more likely to make the decision that saves the most lives if they are first asked to explicitly reflect on and rank which criteria they should use to make the decision.
The paper explicitly discusses EA on the first page, and in the first study participants choose to donate to either SCI or The END Fund and are presented with cost-effectiveness information from GiveWell.
The studies all share roughly the same basic design: presenting participants with a choice between a less effective charity that serves people from a more disadvantaged group and a more effective charity that serves a more advantaged group (though note that the groups in question are beneficiaries in Nigeria and Ethiopia, so both would likely be considered highly disadvantaged relative to the participants). The effects are reliably fairly large: around 25.5% pick the less effective charity in the control condition of study 1, versus over 43% in the experimental condition, where the relative disadvantage of the two beneficiary groups is made salient (e.g. by providing information about the GDP per capita and literacy rate of the countries where the two charities work). Similarly, the differences are 15.8% → 42.4% in study 2A and 20.5% → 40.79% in study 2B. In study 3, over 46% picked the less effective charity when demographic information was made salient and participants weren’t asked to reflect beforehand on what criteria to use, but only 23% did so when first asked to reflect on which criteria they should use to make the decision (e.g. lives saved per donation, the average income of the country, etc.) and to rank their importance.
The effect of encouraging people to explicitly reflect on what decision procedure to use prior to making a decision seems of particular interest to EA. One advantage of this approach is that it is non-paternalistic (i.e. we don’t assume a particular conclusion is normatively correct and try to nudge people towards it). It is also presumably epistemically salutary (conditional on the assumption that more reflection tends to be beneficial, which is certainly open to question). Of course, there are lots of different ways that EAs could encourage others (and themselves) to reflect more on their decision-making in advance (I’m thinking primarily about institutional design, even at a very small scale, though of course this occurs individually as well), so perhaps we should think more about the best ways to do this.
Of note, when asked explicitly, about 92% of respondents ranked saving the most lives per donation as the most important criterion, above the markers of disadvantage. That said, I suspect this effect would have been much less impressive had the ranked options included factors which made the possibility of prioritising the disadvantaged even more salient. I would also expect the ‘disadvantage’ effect the paper found to be much stronger in cases where the pull of prioritising the worst off is more salient, for example, where more justice-related intuitions are elicited. Choosing to help beneficiaries in one African country as opposed to another slightly more socio-economically disadvantaged African country doesn’t strike me as a case that would generate a strong intuitive pull towards prioritising more disadvantaged groups.
One caveat about the paper: when I looked at the open comments where respondents explained why they thought a particular decision criterion was important (e.g. income), a small number suggested that they wouldn’t want to donate to a country with too high an income, because in such cases the would-be beneficiaries should be able to pay for themselves. This isn’t really in line with the explanation suggested by the paper (wanting to help the worst off, rather than not wanting to help people who don’t need help). Further research is needed into what conditions are required for this effect and exactly what is motivating participants.
This was an excellent research summary! I love seeing people write up scientific studies from outside the EA-sphere (this one had some EA links, but I wasn’t familiar with either author).
This sort of thing gives Forum readers a better knowledge base on which to build theories and models; even if any individual study might be flawed, I’m still excited to see more of them get written up, since I’d hope that any given study sheds at least a bit of light on net.
If the EA Forum gets the tag feature from LessWrong, there could easily be a repository of summaries of important papers. That seems quite valuable, given that the alternative is science journalism, which is geared towards random results and a less sophisticated audience. Update: the Forum does have tags. I’ll comment in the relevant post to suggest a “research summary” tag. Mentioning writing summaries of research papers on an EA advice page would be a good way to spread the norm. Based on research on learning (and common sense), writing summaries will also help authors consolidate their new insights in their own minds.