Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1⁄10,000,000 risk of death. That level of risk is in line with traveling a few kilometers by walking, and a small fraction of the risk associated with a day skiing: see the Wikipedia entry on micromorts. We all make such tradeoffs every day, taking on small risks of large harm for high probabilities of smaller benefits that have better expected value.
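A rough back-of-the-envelope check of that risk level (a minimal sketch; the roughly-27-km-per-micromort walking figure is an approximate value from the Wikipedia micromort article, used here as an assumption rather than taken from the comment itself):

```python
# Check that a 1/10,000,000 risk of death is about 0.1 micromorts,
# i.e. on the order of walking a few kilometers.
# Assumption: ~27 km of walking corresponds to ~1 micromort
# (approximate figure from the Wikipedia micromort article).

risk_of_death = 1 / 10_000_000
micromorts = risk_of_death * 1_000_000        # 1 micromort = a 1-in-a-million risk
km_per_micromort = 27                         # assumed walking figure
equivalent_walking_km = micromorts * km_per_micromort

print(f"{micromorts:.1f} micromorts, roughly walking {equivalent_walking_km:.1f} km")
# -> 0.1 micromorts, roughly walking 2.7 km, i.e. "a few kilometers"
```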
So behind the veil of ignorance, for a fixed population size, the ‘altruistic repugnant conclusion’ is actually just what beneficiaries would want for themselves. ‘Repugnance’ would involve the donor prioritizing their scope-insensitive response over the interests of the beneficiaries.
An article by Barbara Fried makes a very strong case against this sort of anti-aggregationism based on the ubiquity of such tradeoffs.
Separately, in the linked Holden blog post the comparison seems to be between 100 large impacts and 10,000 small impacts that are each well under 1% as large. I.e. the hypothetical compares larger total and per-beneficiary impacts against a smaller total benefit spread over more beneficiaries.
That’s not a good illustration for anti-aggregationism.
(2) Provide consistent, full nutrition and health care to 100 people, such that instead of growing up malnourished (leading to lower height, lower weight, lower intelligence, and other symptoms) they spend their lives relatively healthy. (For simplicity, though not accuracy, assume this doesn’t affect their actual lifespan – they still live about 40 years.)
This sounds like improving health significantly, e.g. by 10% or more, over about 14,600 days (40 years) for each of the 100 people, or 1.46 million person-days total. At a 10% improvement, call it 146,000 disability-adjusted life-days.
(3) Prevent one case of relatively mild non-fatal malaria (say, a fever that lasts a few days) for each of 10,000 people, without having a significant impact on the rest of their lives.
Let’s say mild non-fatal malaria costs half of a life-day per day, and ‘a few days’ is 6 days, i.e. 3 disability-adjusted life-days per case. Then the stakes for these 10,000 people are 30,000 disability-adjusted life-days.
146,000 disability-adjusted life-days is a lot more than 30,000 disability-adjusted life-days.
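Spelling that arithmetic out (a minimal sketch using the same assumed figures as above: a 10% health improvement over 40-year lives for the 100 people, and 6 days at half a life-day per day for each of the 10,000 malaria cases):

```python
# Option (2): full nutrition and health care for 100 people over ~40-year lives.
people_2 = 100
days_per_life = 40 * 365                  # ~14,600 days each
health_improvement = 0.10                 # assumed "10% or more" improvement
benefit_2 = people_2 * days_per_life * health_improvement
# -> 146,000 disability-adjusted life-days

# Option (3): prevent one mild malaria case for each of 10,000 people.
people_3 = 10_000
days_sick = 6                             # "a few days" taken as 6
cost_per_sick_day = 0.5                   # half a life-day per day of fever
benefit_3 = people_3 * days_sick * cost_per_sick_day
# -> 30,000 disability-adjusted life-days

print(benefit_2, benefit_3)               # 146000.0 30000.0
```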
Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1⁄10,000,000 risk of death.
Would you also volunteer to be killed so that 10,000,000 people just like you could have $100 that they could only spend to counterfactually benefit themselves?
I think the probability here matters beyond just its effect on the expected utility, contrary, of course, to EU maximization. I’d take $100 at the cost of an additional 1⁄10,000,000 risk of eternal torture (or any outcome that is finitely but arbitrarily bad). On the other hand, consider the following 5 worlds:
A. Status quo with 10,000,000 people with finite lives and utilities. This world has finite utility.
B. 9,999,999 people get an extra $100 compared to world A, and the other person is tortured for eternity. This world definitely has a total utility of negative infinity.
C. The 10,000,000 people each decide to take $100 for an independent 1⁄10,000,000 risk of eternal torture. This world, with probability ≈ 1 - 1/e ≈ 0.63 (i.e. “probably”), has a total utility of negative infinity (see the sketch after this list).
D. The 10,000,000 people together decide to take $100 for a 1⁄10,000,000 risk that they all are tortured for eternity (i.e. none of them are tortured, or all of them are tortured together). This world, with probability 9,999,999⁄10,000,000, has finite utility.
E. Only one out of the 10,000,000 people decides to take $100 for a 1⁄10,000,000 risk of eternal torture. This world, with probability 9,999,999⁄10,000,000, has finite utility.
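A quick check of the probabilities in worlds C, D, and E (a minimal sketch, treating each person’s 1⁄10,000,000 risk in C as independent):

```python
# Probabilities behind worlds C, D and E above.
n = 10_000_000
p = 1 / n                                 # per-person risk of eternal torture

# World C: each of the n people takes an independent 1/n risk.
# P(at least one person is tortured) = 1 - (1 - p)^n, which is about 1 - 1/e.
p_c_infinite = 1 - (1 - p) ** n
print(round(p_c_infinite, 3))             # ~0.632

# Worlds D and E: only a single 1/n risk is taken (jointly in D, by one person in E),
# so the world stays finite with probability (n - 1)/n = 9,999,999/10,000,000.
p_de_finite = 1 - p
print(p_de_finite)                        # 0.9999999
```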
I would say D >> E > A >>>> C >> B, despite the fact that in expected total utility, A >>>> B=C=D=E. If I were convinced this world would be reproduced infinitely many times (or, e.g., 10,000,000 times) independently, I’d choose A, consistent with expected utility.
So, when I take $100 for a 1⁄10,000,000 risk of death, it’s not because I’m maximizing expected utility; it’s because I don’t care about any 1⁄10,000,000 risk. I’m only going to live once, so I’d have to take that trade (or similar such trades) hundreds of times for it to even start to matter to me. However, I also (probably) wouldn’t commit to taking this trade a million times (or a single equivalent trade, with $100,000,000 for a ~0.1 probability of eternal torture; you can adjust the cash for diminishing marginal returns). Similarly, if hundreds of people took the trade (with independent risk), I’d start to be worried, and I’d (probably) want to prevent a million people from doing it.
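A quick check of the “a million times” figure above (a minimal sketch, assuming the million risks are independent):

```python
# Taking the $100-for-1/10,000,000-risk trade a million times, independently.
p = 1 / 10_000_000
trades = 1_000_000

total_cash = 100 * trades                 # $100,000,000
p_any_torture = 1 - (1 - p) ** trades     # about 1 - e^(-0.1), i.e. ~0.1

print(total_cash, round(p_any_torture, 3))  # 100000000 0.095
```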
146,000 disability-adjusted life-days is a lot more than 30,000 disability-adjusted life-days.
This is true. Still, for many people, intuitions against aggregation seem to stand up even if the number of people with mild ailments increases without limit (millions, billions, and beyond). For some empirical evidence, see http://eprints.lse.ac.uk/55883/1/__lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_Voorhoeve,%20A_How%20should%20we%20aggregate_Voorhoeve_How%20should%20we%20aggregate_2014.pdf