Rational altruism and risk aversion

Let’s start with a very hypothetical but tricky dilemma. The importance of this dilemma will become clear at the end.

Suppose I give you a choice between two options. If you choose option A, I will save one person. If you choose option B, I will flip a fair coin: heads means I will save 104 people, tails means I will kill 100 people. The expected number of people saved in the second option is 104 times a probability of 50% minus 100 times a probability of 50%, which equals 2. Does this mean that option B is twice as good as option A? I expect most people share my intuition that option A should be chosen. This is due to risk aversion: we do not want to run the risk of losing, say, 100 people’s lives. However, when it comes to rational or pure altruism, one could argue that option B is the better choice.
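
To check the arithmetic, here is a minimal sketch in Python (the encoding is my own) that computes the expected number of lives saved under each option, counting a death as a negative number of lives saved:

```python
# Expected number of net lives saved for each option (a death counts as -1).
p_heads = 0.5

ev_A = 1.0                                      # option A: one person saved for sure
ev_B = p_heads * 104 + (1 - p_heads) * (-100)   # option B: fair coin flip

print(f"Expected value of A: {ev_A}")  # 1.0
print(f"Expected value of B: {ev_B}")  # 2.0
```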

First, we face scope neglect.[1] You can see the difference between saving one person and saving no-one. But when I save a group of 104 people, you don’t see the difference with saving a group of 100 people, unless you make the effort to count. Saving 104 people feels as good as saving only 100 people. The flip side of me saving 104 people is me killing 100 people, and if saving 104 people feels just as good as saving 100 people, then saving that many people cannot seem to morally outweigh killing 100 people. Option B does not seem to do any good in expectation.

However, pure altruism implies doing what other people want, and not imposing your own values or preferences on others against their will. The 104th person does not share your scope-neglecting judgment that saving his or her life has zero added value. That last person wants to be saved as much as if he or she were the first person to be saved. Saving an extra person always has the same moral value, no matter how many people are already saved. (This is different from money: an extra dollar is worth less to you the more money you already have.)

Second, we have an act-omission bias. Killing a person means causing the death of a person. But there are many interpretations of what counts as a cause. Letting someone die (not saving someone) can also count as a necessary and sufficient cause of the death of that person. In this broader interpretation of ‘cause’, both killing someone (an act) and letting someone die (an omission) cause the death of a person. The difference between killing and letting die is the reference situation: if I kill someone, we compare that situation with a reference situation in which I did not act (e.g. in which I was not present), which means the person would not have died. If I let someone die, we take as reference situation the one in which the person died anyway (in my absence). So if I am able to save 104 people in option B, it means those 104 people could have been saved in option A as well. If I only save one person in option A, I let 103 people die, which, in the broader interpretation, means I cause the death of 103 people.

However, pure altruism implies not imposing your own interpretations and preferences on the others you help. I may use a narrow interpretation of ‘cause’, in which killing causes the death of a person but letting die does not. But who is to say that my interpretation of a cause is the right one? I may judge the act of killing to be worse than the omission of letting die, because I prefer to take as reference situation the one in which I am absent or do not act. But who is to say that my choice of reference situation is the right one? Other people may not share my valuation of acts and omissions and my preference for the reference situation in which I am absent. Other people could prefer another reference situation, which means they locate the dividing line between acts and omissions elsewhere. They do not want to die, and for them it may be irrelevant whether the cause of death was my act or my omission. This can be seen in option A, where I let 103 people die. I didn’t tell you why they died, but the reason could be that they are killed by someone else (or something else, such as a falling rock or a virus). So I let those people die because those people are being killed. In option B, when the coin lands heads, I simply prevent that killer from killing 104 people. That is how I save those people.

Third, there is a framing effect.[2] This is related to the previous point about the reference situation. Suppose 104 people are in danger and option A remains the same: I save one person for sure. But option B now becomes a different option B’ where I throw a 104-sided die. Only if it shows the number 1 will I save all 104 people; otherwise I do nothing and no-one is saved. In expectation, option B’ saves one person, which is as good as option A. Still, being risk averse, I prefer option A, because I do not want to run the very high risk (probability 103/104, which is close to 99%) that no-one is saved. But suppose that in a different situation, you have to choose between options C and D. Option C means I cause the death of 103 people for sure, by letting them die (letting them be killed). In option D, I throw the same die, and if it shows the number 1, no-one dies; otherwise all 104 people die. A lot of people are now risk seeking: they prefer option D, because they want to avoid the certainty that 103 people die. In option D, there is still a non-zero probability that no-one dies. But as you may have noticed: options A and C are exactly the same, as are options B’ and D. The reason why options A and C appear to be different is the choice of reference situation. In option A, when we consider one person being saved, the reference situation is the one where everyone dies. In option C, the reference situation is the one where no-one dies.
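
The equivalence can be made explicit with a small sketch (my own encoding, representing each option as a probability distribution over the number of survivors among the 104 people in danger):

```python
from fractions import Fraction

AT_RISK = 104  # number of people in danger in every option

# Each option is a distribution {number of survivors: probability}.
A       = {1: Fraction(1)}                                    # "I save one person for sure"
C       = {AT_RISK - 103: Fraction(1)}                        # "103 people die for sure"
B_prime = {AT_RISK: Fraction(1, 104), 0: Fraction(103, 104)}  # "1/104 chance I save all 104"
D       = {AT_RISK - 0: Fraction(1, 104),                     # "1/104 chance no-one dies,
           AT_RISK - 104: Fraction(103, 104)}                 #  otherwise all 104 die"

print(A == C)        # True: the 'gain' frame and the 'loss' frame describe the same lottery
print(B_prime == D)  # True

def expected_survivors(dist):
    return sum(p * k for k, p in dist.items())

print(expected_survivors(A), expected_survivors(B_prime))  # 1 and 1
```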

Again, pure altruism implies not imposing your own preferences (e.g. your preferred reference situation) on others. Other people don’t care about your preferred reference situation or how you frame the problem: they just want to be saved.

Fourth, we have a personal contribution preference. If you choose option B, you may be unlucky and have me kill 100 people. But suppose that next to you there are nine other people with whom I play this game. Everyone can choose between options A and B. If you all choose option A, then I save 10 people for sure (one for each of the 10 players). If you all choose option B, you can calculate that with 62% likelihood, the group of 10 players saves at least 20 people on net. If the group becomes larger, that likelihood approaches 1. You may cause the death of 100 people, but another player may cause the rescue of 104 people. To keep it simple, suppose that the first 100 people saved by that other player are exactly the ones who would have died as a result of your coin toss. Hence, together, you and that other player have saved 4 people. You didn’t save anyone, but that other player saved 4 people.
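
The 62% figure can be reproduced with a short calculation (a sketch using the binomial distribution; the helper function net_saved is my own):

```python
from math import comb

n_players, p_heads = 10, 0.5

def net_saved(heads: int) -> int:
    """Net lives saved when `heads` of the n_players fair coins land heads."""
    return 104 * heads - 100 * (n_players - heads)

# Probability that the group's net result is at least +20 lives.
prob = sum(comb(n_players, k) * p_heads**k * (1 - p_heads)**(n_players - k)
           for k in range(n_players + 1)
           if net_saved(k) >= 20)

print(f"P(at least 20 net lives saved) = {prob:.3f}")  # about 0.623, i.e. roughly 62%
```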

However, from a purely altruistic perspective, your feelings and preferences do not count. You may feel bad because you didn’t save anyone, but other people in your group saved many people. And for the people who are saved, it doesn’t matter whether they were saved by you or by anyone else. You may have a preference to save someone personally, but that preference is not shared by the ones who can be saved. When the group of players is small or when you are the only player, you may have a preference for option A, but when the group is large, you may be convinced by the above reasoning that option B is better. However, the way your preference for option B depends on the group size (e.g. the minimum group size required for you to choose option B) is not necessarily the way the people who can be saved prefer option B. They always prefer the option that gives them the highest likelihood of being saved, independent of the size of the group of players. Hence, even when you are the only player, from a rational altruistic perspective you should make the same choice that you would make when there are thousands of players (and vice versa: the choice that you make in the case of many players should be the same as when you are the only player).

Fifth, we have a zero-risk bias.[3] Let’s change our initial dilemma. You have the choice between option A*, in which with 90% probability I kill 100 people and with 10% probability I save one person, and option B*, in which with 95% probability I kill 100 people and with 5% probability I save 104 people. In both options, the likelihood of 100 people being killed is very high (90% and 95%), whereas the likelihood of saving at least one person is very low (10% and 5%). The difference between 10% and 5% does not seem to matter that much. But the big difference is the number of people who are saved when we end up in the lucky situation that people are saved instead of killed. In option B*, 104 times as many people will be saved as in option A*. That is why my intuition says to pick option B*.
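
For completeness, a quick expected-value check (my own encoding, again counting a death as minus one life saved):

```python
# Expected net lives saved for the two options (deaths counted as negative).
ev_A_star = 0.90 * (-100) + 0.10 * 1     # -89.9
ev_B_star = 0.95 * (-100) + 0.05 * 104   # -89.8

print(ev_A_star, ev_B_star)
# Both are terrible, but B* is (slightly) better in expectation, and its lucky
# branch saves 104 people instead of 1.
```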

But note that options A* and B* are in some sense similar to options A and B. The choice between A* and B* can be described as a two-stage game. In the first stage, I throw a 10-sided die, and when it shows a value higher than 1 (which happens with 90% probability), I kill 100 people and the game ends. But when the die shows a 1, you are lucky: I don’t kill anyone (yet), but instead you enter the second stage of the game, in which I let you play our initial game and choose between options A and B. In this two-stage game, both options A* and B* involve the risk that 100 people are killed in the first stage, so that risk is unavoidable. But if the die shows the value 1 in the first stage, you can avoid the remaining risk of 100 people being killed by choosing option A in the second stage. Hence, if you have a zero-risk bias and you are lucky enough to enter the second stage of the game, you choose option A, which doesn’t involve any risk of people being killed by me. You want to minimize risk, but choosing option A* instead of option B* only reduces the risk of 100 people being killed from 95% to 90%, which seems rather futile. In contrast, choosing option A instead of B reduces the risk from 50% all the way to 0%: you can eliminate the risk completely. Not choosing A* because the risk reduction seems futile is an example of futility thinking.[4]
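
That the two-stage game really is the same as the choice between A* and B* can be verified with a small sketch (my own encoding of the options as probability distributions over net lives saved):

```python
from fractions import Fraction

def compose(p_second_stage, second_stage_option):
    """Outcome distribution of the two-stage game: with probability
    1 - p_second_stage, 100 people are killed and the game ends; with
    probability p_second_stage, the given second-stage option is played."""
    dist = {-100: 1 - p_second_stage}
    for outcome, p in second_stage_option.items():
        dist[outcome] = dist.get(outcome, Fraction(0)) + p_second_stage * p
    return dist

# Second-stage options, as distributions over net lives saved.
A = {1: Fraction(1)}
B = {104: Fraction(1, 2), -100: Fraction(1, 2)}

# The one-stage options A* and B* as described above.
A_star = {1: Fraction(1, 10), -100: Fraction(9, 10)}
B_star = {104: Fraction(1, 20), -100: Fraction(19, 20)}

p_enter_second_stage = Fraction(1, 10)  # the 10-sided die shows a 1
print(compose(p_enter_second_stage, A) == A_star)  # True
print(compose(p_enter_second_stage, B) == B_star)  # True
```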

This two-stage game is an example of the Allais paradox[5]: my intuition says to choose B* over A*, but to choose A over B. This is strange, because option A is simply option A* when we consider only the second stage of the game. If I consider the whole two-stage game, I prefer B*. But the first stage of the game is irrelevant, because it doesn’t involve making a choice. So we can equally consider only the second stage of the game, which does involve making a choice (between options A and B). In that case I do not prefer B, and hence I should not prefer B* either. From one perspective I prefer B*; from another perspective I prefer A*.

However, pure altruism implies not imposing your own perspective on others. Whether you look at the two-stage game as one whole game, or you consider the second stage separately, is not something that other people care about. Therefore, you should look at the choice between A and B in exactly the same way as the choice between A* and B*, which means you have to avoid zero risk bias.

Sixth, we do narrow bracketing[6], which is related to the above two-stage game. Suppose that next to the game where you can choose between options A and B, you can play a second game in which you choose between options X and Y. Choosing X means I will kill 100 people; choosing Y means I throw a fair coin: heads means I kill 202 people, tails means I kill no-one. According to the abovementioned framing effect, many people are risk seeking when it comes to risky losses. That means many people choose option Y over X: in Y there is at least a non-zero probability that no-one is killed. When it comes to risky gains (lives saved), most people are risk averse, which means they choose option A over B: in option A there is the certainty that at least someone is saved. Now we can combine the two games, which gives four options. Option AX (choosing A in the first game and X in the second) means that on net 99 people die for sure (100 killed, one saved). Option AY gives a 50% probability that on net 201 people are killed and a 50% probability that 1 person is saved. Option BX means a 50% probability of killing 200 people on net and a 50% probability of saving 4 people. Finally, option BY, which is not so relevant in this argument, gives a 25% probability of killing 302 people, 25% of killing 100, 25% of killing 98 and 25% of saving 104 people. Now we see an irrationality: we prefer A over B and Y over X, but the combination AY is dominated by the combination BX. The worst case of BX (200 killed) is better than the worst case of AY (201 killed), the best case of BX (4 saved) is better than the best case of AY (1 saved), and the good and bad cases are equally likely in both options: BX gives a better outcome than AY at every level of luck.
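
A small enumeration (my own encoding of the options as functions of a fair coin toss, with outcomes expressed as net lives saved) makes this dominance explicit:

```python
import itertools

# Each simple option maps the face of a fair coin to net lives saved
# (a constant for the riskless options A and X).
A = {'H': 1,    'T': 1}     # save 1 person for sure
B = {'H': 104,  'T': -100}  # heads: save 104, tails: kill 100
X = {'H': -100, 'T': -100}  # kill 100 people for sure
Y = {'H': -202, 'T': 0}     # heads: kill 202, tails: kill no-one

def combine(game1_option, game2_option):
    """Net outcome of playing both games, for every pair of coin results."""
    return {(c1, c2): game1_option[c1] + game2_option[c2]
            for c1, c2 in itertools.product('HT', repeat=2)}

AY, BX = combine(A, Y), combine(B, X)
print(sorted(AY.values()))  # [-201, -201, 1, 1]
print(sorted(BX.values()))  # [-200, -200, 4, 4]

# Compare the equally likely outcomes rank by rank: BX beats AY at every rank.
print(all(bx > ay for bx, ay in zip(sorted(BX.values()), sorted(AY.values()))))  # True
```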

Again, pure altruism implies not imposing your own perspective on others. Our preference for A over B depends on our perspective: if we consider the first game separately, we prefer A, but if we consider the first game as part of a bigger game, we prefer B. For the people who want to be saved, it shouldn’t matter whether we consider the two games as one whole, or whether we ‘narrow bracket’ and consider the two games separately.

In summary, pure altruism means that we should not impose our own values, preferences, interpretations and perspectives on others who simply want to be saved. If you want to help others purely altruistically, choosing option A instead of B would be irrational. Nevertheless, after the above long and detailed reasoning, it still feels intuitively wrong to choose B. This is a kind of moral illusion: a persistent intuitive moral judgment that is inconsistent with other, stronger moral judgments.

What lesson should we draw from this moral illusion? Of course, in reality we do not face a dilemma like the choice between options A and B, where some outcomes may seriously harm instead of help others. If we did face such a choice, we might still follow our moral gut feeling and choose option A, i.e. the option with the least risk of causing harm. But when it comes to effective altruism, where the goal is to effectively and impartially help others purely altruistically, we should try to be at least a little bit more risk neutral instead of risk averse, especially in the many usual cases where we only have to choose between non-harmful ways of helping others.

More concretely, instead of supporting specific projects that directly help others and produce small but certain beneficial outcomes, it is worthwhile to focus more on risky bets such as scientific research, entrepreneurship, advocacy and policy change. Scientific and technological research is high risk, high impact: there is only a small chance that the research is fruitful and results in a useful new technology, but when it is, that technology can do a lot of good. Similarly, a start-up that develops a new technology has only a small chance of succeeding, but when it does, it can become big and produce a lot of good with that technology. Policy change is hard and has a low probability of success, but when it succeeds, it can have a huge positive impact. Investing is high risk, high reward: the expected return on investment is higher when investments are riskier (the extra expected return of a risky investment compared to a safe one is the risk premium). This makes investing interesting as a strategy for earning to give, which involves earning a higher income in order to donate more to charities.

Consider a group of effective altruists who decide to become risky investors. Many of them will lose and get very low returns, and hence will not be able to donate much to charity. But a small minority will win and earn a huge return that can be donated to charity. For an effective altruist, it doesn’t matter who in the group wins and is able to donate the money. If each altruist wants the personal satisfaction of making a personal donation to a charity, they will all choose safe investments, so that each of them is sure to be able to donate at least something. But this means that the total return of the group, and hence the total amount donated, will be lower. The group of effective altruists who choose the risky investments is in the end able to donate much more to charity, even though any individual effective altruist in this group is very likely to make only a negligible contribution.
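
As a purely illustrative sketch, with made-up numbers that are not meant to model any real investment (the 2% safe return, the 10% win probability and the payoff multipliers are all hypothetical assumptions), the group-level effect looks like this:

```python
# Hypothetical per-dollar returns: a safe asset returning 2% for sure, and a
# risky asset that multiplies the stake by 9 with probability 0.1 and by 0.2
# with probability 0.9 (expected return 8%; the extra 6% is the risk premium).
p_win, win_mult, lose_mult, safe_mult = 0.10, 9.0, 0.20, 1.02

n_altruists, stake = 1000, 1000.0
expected_safe  = n_altruists * stake * safe_mult
expected_risky = n_altruists * stake * (p_win * win_mult + (1 - p_win) * lose_mult)

print(f"{expected_safe:,.0f} vs {expected_risky:,.0f}")  # 1,020,000 vs 1,080,000
# In expectation, the risky group can donate more in total, even though about
# 90% of its members individually end up with only a fifth of their stake.
```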


[1] Desvousges, W., Johnson, R., Dunford, R., Boyle, K. J., Hudson, S., & Wilson, K. N. (1992). Measuring nonuse damages using contingent valuation: An experimental evaluation of accuracy. Research Triangle Institute Monograph 92-1.

[2] Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.

[3] Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.

[4] Unger, P. K. (1996). Living high and letting die: Our illusion of innocence. Oxford University Press, USA.

[5] Allais, M. (1953). Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école américaine. Econometrica, 21(4), 503–546.

[6] Rabin, M., & Weizsäcker, G. (2009). Narrow bracketing and dominated choices. American Economic Review, 99(4), 1508–1543.
