Rawls’ veil of ignorance supports maximizing expected value
One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. The idea is intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value maximization.
The thought experiment begins with a group of rational agents in Rawls’ “original position”. Here they have no knowledge of who or what they will be when they enter the world. They could be any race, gender, species, or thing. Because they don’t know who or what they will be, they have no unfair biases, and so should be able to design a just society and make just decisions.
Now for two expected value thought experiments from the Cambridge EA introductory seminar discussion guide. Suppose that a disease, or a war, or something, is threatening to kill 500 people. And suppose you only have enough resources to implement one of the following two options:
Version A…
Option 1: Save 400 lives, with certainty [EV: +400 saved / -100 dead]
Option 2: Save 500 lives with 90% probability; save no lives with 10% probability [EV: +450 saved / -50 dead]

Version B…
Option 1: 100 people die, with certainty [EV: +400 saved / -100 dead]
Option 2: 90% chance no one dies; 10% chance all 500 people die [EV: +450 saved / -50 dead]
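To make the bracketed figures concrete, here is a short Python sketch of the arithmetic (my own illustration, not from the discussion guide):

```python
# Expected lives saved and lost for each option, assuming 500 people at risk.
N = 500

def ev_saved(outcomes):
    """Expected lives saved, given (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

# Versions A and B describe the same two gambles, framed as saving vs. dying.
option_1 = [(1.0, 400)]            # save 400 lives with certainty
option_2 = [(0.9, 500), (0.1, 0)]  # 90%: save all 500; 10%: save none

for name, opt in [("Option 1", option_1), ("Option 2", option_2)]:
    saved = ev_saved(opt)
    print(f"{name}: EV {saved:+.0f} saved / {saved - N:+.0f} dead")
# Option 1: EV +400 saved / -100 dead
# Option 2: EV +450 saved / -50 dead
```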
Now imagine that you’re an agent behind the veil of ignorance. You could enter the world as any of the 500 individuals. What do you want the decision-maker to choose? In both versions of the thought experiment, Option 1 gives you an 80% chance of surviving (400 of the 500 live for certain), while Option 2 gives you a 90% chance (a 90% probability that all 500 live). The clear choice is Option 2.
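The same calculation can be run from the individual’s perspective behind the veil (again my own sketch): you are one of the 500, chosen uniformly at random, so your survival probability under each option is the expected fraction of survivors.

```python
# Survival probability for a random one of the N = 500 at-risk individuals.
N = 500

# P(survive) = sum over outcomes of P(outcome) * (survivors in that outcome) / N
p_option_1 = 1.0 * 400 / N            # 400 of 500 live for certain -> 0.8
p_option_2 = 0.9 * 500 / N + 0.1 * 0  # 90%: all live; 10%: none live -> 0.9

print(p_option_1, p_option_2)  # 0.8 0.9
```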
This framework bypasses the common objection that it’s wrong to take risks with other people’s lives, because behind the veil both options are risks to you. In my experience, part of this objection often stems from understandable discomfort with risk-taking in high-stakes scenarios. But here risk-taking is the altruistic approach, so refusing to accept risk would ultimately be for the emotional benefit of the decision-maker, not the people at risk. This topic can also lead to discussion about the meaning of altruism, which is a highly relevant idea for intro seminar participants.
This argument isn’t new (reviewers noted that John Harsanyi made it first, and Holden Karnofsky discusses it in his post on one-dimensional ethics), but I hope you find this short explanation useful for your own thinking and for your communication of effective altruism.