Comments on Defending One-Dimensional Ethics will go here.
I don’t think your argument against risk aversion fully addresses the issue. You give one argument for diversification that is based on diminishing marginal utilities, and then show that this plausibly doesn’t apply to global charities. However, there’s a separate argument for diversification that is actually about risk itself, not diminishing marginal utility. You should look at Lara Buchak’s book, “Risk and Rationality”, which argues that there is a distinct form of rational risk aversion (or risk-seeking). On a risk-neutral approach, each outcome counts in exact proportion to its probability, regardless of whether it’s the best outcome, the worst, or in between. On a risk-averse approach, the top ten percentiles of outcomes carry less relative weight than the bottom ten percentiles, and vice versa for risk-seeking approaches.
This turns out to correspond precisely to one way of making sense of some kinds of inequality aversion: making things better for a worse-off person improves the world more than making things better by the same amount for a better-off person.
None of the arguments you give tells against this approach as opposed to the risk-neutral one.
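For concreteness, here is a minimal sketch (my own toy version, not a verbatim rendering of Buchak’s formalism or anything from the post) of how a risk-weighted calculation can differ from a risk-neutral one: increments of utility above the worst outcome are weighted by a risk function applied to the probability of doing at least that well, and a convex risk function (p², an illustrative choice) downweights the good tail.

```python
# A sketch of a risk-weighted value calculation (illustrative, not Buchak's
# exact formalism). Outcomes are ordered from worst to best; each increment of
# utility above the worst outcome is weighted by a risk function r applied to
# the probability of doing at least that well. r(p) = p recovers ordinary
# expected utility; a convex r such as p**2 underweights the good tail,
# i.e. risk aversion.

def risk_weighted_value(outcomes, probs, r=lambda p: p):
    """outcomes: utility numbers; probs: matching probabilities summing to 1."""
    pairs = sorted(zip(outcomes, probs))            # worst outcome first
    utils = [u for u, _ in pairs]
    ps = [p for _, p in pairs]
    value = utils[0]                                # you get at least the worst outcome
    for i in range(1, len(utils)):
        prob_at_least_this_good = sum(ps[i:])
        value += r(prob_at_least_this_good) * (utils[i] - utils[i - 1])
    return value

gamble = ([0, 100], [0.5, 0.5])                          # 50/50 between nothing and 100
print(risk_weighted_value(*gamble))                      # 50.0, the risk-neutral value
print(risk_weighted_value(*gamble, r=lambda p: p ** 2))  # 25.0, risk-averse weighting
```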
One important challenge to the risk-sensitive approach is that, if you make large numbers of uncorrelated decisions, the law of large numbers kicks in and it ends up behaving just like risk-neutral decision theory. But cases of making a single large, global-scale intervention are precisely the ones in which you aren’t making a large number of uncorrelated decisions, and so considerations of risk sensitivity can become relevant.
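To illustrate that law-of-large-numbers point, here is a toy simulation (numbers of my own choosing, assuming repeated independent 50/50 gambles between 0 and 100 units of utility): the bottom decile of the per-decision average climbs toward the mean as the number of uncorrelated decisions grows, which is why risk weighting stops mattering in that regime but can still matter for a one-off intervention.

```python
# Toy simulation: as the number of independent repetitions of a 50/50 gamble
# between 0 and 100 grows, the bad tail of the *average* outcome per decision
# approaches the mean of 50, so even a risk-averse weighting of the aggregate
# ends up close to the risk-neutral value. A single one-off intervention never
# gets this averaging.

import random

random.seed(0)

def bottom_decile_of_average(n_decisions, trials=5_000):
    """10th-percentile per-decision average over n independent 50/50 gambles."""
    averages = []
    for _ in range(trials):
        total = sum(100 if random.random() < 0.5 else 0 for _ in range(n_decisions))
        averages.append(total / n_decisions)
    averages.sort()
    return averages[trials // 10]

for n in (1, 10, 100, 1000):
    print(n, bottom_decile_of_average(n))
# roughly: 1 -> 0.0, 10 -> 30.0, 100 -> ~44, 1000 -> ~48  (the mean is 50)
```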
You’re right that I haven’t comprehensively addressed risk aversion in this piece. I’ve just tried to give an intuition for why the pro-risk-aversion intuition might be misleading.
A big difference between button 1 (a small benefit for someone) and button 1A (a small chance of a small benefit for a large number of people) is the kind of system required to produce these outcomes.
Button 1 requires basically a day’s worth of investment by someone choosing to give the benefit to another person. Button 1A requires… perhaps a million times as much effort? We’re talking about the equivalent of passing a national holiday act, which requires an enormous amount of coordination and investment. And the results do not scale linearly at all. That is, a person investing a day’s worth of effort to try to pass a national holiday act doesn’t have a 10^-8 chance of success; they have a much, much smaller chance, many orders of magnitude less.
In other words, the worlds posited by a realistic interpretation of what these buttons mean are completely different, and the world where the button 1A process succeeds is to be preferred by at least six orders of magnitude. The colloquial understanding of the “big” impact is closer to right than the multiplication suggests.
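To put rough numbers on this (the figures below are illustrative assumptions of mine, not values given in the post): the naive expected values of the two buttons match, but dividing by the effort each press realistically stands in for changes the comparison by orders of magnitude.

```python
# Rough numbers for the point above (all figures are illustrative assumptions).
# The naive expected values of the two buttons match, but benefit per day of
# effort differs by orders of magnitude once you price in what "pressing"
# button 1A realistically requires.

benefit_per_person = 1.0                  # one small benefit, e.g. a pleasant day

# Button 1: one day of effort, certainly benefits one person.
button1_expected_benefit = 1 * benefit_per_person
button1_effort_days = 1

# Button 1A as described: a 1e-8 chance of benefiting 1e8 people.
button1a_expected_benefit = 1e-8 * 1e8 * benefit_per_person    # also 1.0

# But actually bringing about the 1A outcome (something like passing a national
# holiday act) is assumed here to take about a million times the effort.
button1a_effort_days = 1e6

print(button1_expected_benefit / button1_effort_days)    # 1.0 benefit per day of effort
print(button1a_expected_benefit / button1a_effort_days)  # 1e-06, six orders of magnitude less
```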
I’m not sure exactly how that affects the overall conclusions, but I think this same dynamic applies to several of the odd conclusions: the flaw is that the button is doing much, much more work in some situations than in others that are described as identical, and that descriptive flaw is pumping our intuitions to ignore those differences rather than address them.
I started writing a comment, then it got too long, so I put in my shortform here. :)
It’s interesting that you have that intuition! I don’t share it, and I think the intuition somewhat implies some of the “You shouldn’t leave your house” type things alluded to in the dialogue.
I’m pretty happy to bite that bullet, especially since I’m not an egoist. I should still leave my house because others are going to suffer far worse (in expectation) if I don’t do something to help, at some risk to myself. It does seem strange to say that if I didn’t have any altruistic obligations then I shouldn’t take very small risks of horrible experiences. But I have the stronger intuition that those horrible experiences are horrible in a way that the nonexistence of nice experiences isn’t. And that “I” don’t get to override the preference to avoid such experiences, when the counterfactual is that the preferences for the nice experiences just don’t exist in the first place.
I don’t necessarily disagree with your conclusion, but I don’t know how you can feel sure about weighing a chicken’s suffering against a person’s.
But I definitely disagree with the initial conclusion, and I think it is because you don’t fear extreme suffering enough. If everyone behind the veil of ignorance knew what the worst suffering was, they would fear it more than they would value time at the beach.
Re: longtermism, I find the argument in Pinker’s latest book to be pretty compelling:
The optimal rate at which to discount the future is a problem that we face not just as individuals but as societies, as we decide how much public wealth we should spend to benefit our older selves and future generations. Discount it we must. It’s not only that a current sacrifice would be in vain if an asteroid sends us the way of the dinosaurs. It’s also that our ignorance of what the future will bring, including advances in technology, grows exponentially the farther out we plan. It would have made little sense for our ancestors a century ago to have scrimped for our benefit—say, diverting money from schools and roads to a stockpile of iron lungs to prepare for a polio epidemic—given that we’re six times richer and have solved some of their problems while facing new ones they could not have dreamed of.
I agree with this argument for discount rates, but I think it is a practical rather than philosophical argument. That is, I don’t think it undermines the idea that if we were to avert extinction, all of the future lives thereby enabled should be given “full weight.”
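For a sense of scale, here is a quick sketch (with an illustrative 2% annual rate of my own choosing, not a figure from Pinker or the post) of what a constant discount rate does over long horizons compared with giving future benefits “full weight” (a zero rate).

```python
# How a constant annual discount rate compounds over long horizons, versus a
# zero rate ("full weight"). The 2% rate and the horizons are illustrative
# choices, not figures from Pinker or the post.

def present_value(benefit, years, annual_rate):
    return benefit / (1 + annual_rate) ** years

for years in (10, 100, 500):
    print(years, round(present_value(1.0, years, 0.02), 5), present_value(1.0, years, 0.0))
# 10  -> 0.82035  1.0
# 100 -> 0.13803  1.0
# 500 -> 5e-05    1.0
```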