Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1⁄10,000,000 risk of death.
Would you also volunteer to be killed so that 10,000,000 people just like you could have $100 that they could only spend to counterfactually benefit themselves?
I think the probability here matters beyond just its effect on the expected utility, contrary, of course, to EU maximization. I’d take $100 at the cost of an additional 1⁄10,000,000 risk of eternal torture (or any outcome that is finitely but arbitrarily bad). On the other hand, consider the following 5 worlds:
A. Status quo with 10,000,000 people with finite lives and utilities. This world has finite utility.
B. 9,999,999 people get an extra $100 compared to world A, and the other person is tortured for eternity. This world definitely has a total utility of negative infinity.
C. The 10,000,000 people each decide to take $100 for an independent 1⁄10,000,000 risk of eternal torture. This world, with probability ~ 1 − 1/e ~ 0.63 (i.e. “probably”; see the sketch after this list), has a total utility of negative infinity.
D. The 10,000,000 people together decide to take $100 for a single shared 1⁄10,000,000 risk that they are all tortured for eternity (i.e. either none of them are tortured, or all of them are tortured together). This world, with probability 9,999,999⁄10,000,000, has finite utility.
E. Only one of the 10,000,000 people decides to take $100 for a 1⁄10,000,000 risk of eternal torture. This world, with probability 9,999,999⁄10,000,000, has finite utility.
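For concreteness, here is a quick sketch of the arithmetic behind the probabilities quoted in C, D, and E, assuming only the independence already stated above (the variable names are just illustrative):

```python
# Quick check of the probabilities quoted for worlds C, D, and E.
p = 1 / 10_000_000   # per-trade risk of eternal torture
n = 10_000_000       # number of people taking the trade in world C

# World C: n independent risks; probability that at least one fires.
p_c_bad = 1 - (1 - p) ** n
print(p_c_bad)       # ~0.632, i.e. approximately 1 - 1/e

# Worlds D and E: only a single 1/10,000,000 risk is taken
# (shared by everyone in D, taken by one person in E).
p_de_finite = 1 - p
print(p_de_finite)   # 0.9999999 = 9,999,999/10,000,000
```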
I would say D >> E > A >>>> C >> B, despite the fact that in expected total utility, A >>>> B = C = D = E. If I were convinced this world would be reproduced infinitely many times (or, e.g., 10,000,000 times) independently, I’d choose A, consistent with expected utility maximization.
So, when I take $100 for a 1⁄10,000,000 risk of death, it’s not because I’m maximizing expected utility; it’s because I don’t care about any 1⁄10,000,000 risk. I’m only going to live once, so I’d have to take that trade (or similar trades) hundreds of times before it even started to matter to me. However, I also (probably) wouldn’t commit to taking this trade a million times (or a single equivalent trade: $100,000,000 for a ~0.1 probability of eternal torture; you can adjust the cash for diminishing marginal returns). Similarly, if hundreds of people took the trade (with independent risks), I’d start to be worried, and I’d (probably) want to prevent a million people from doing it.
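A rough check of the arithmetic behind that last equivalence (the ~0.1 probability and the cash total are the figures from the paragraph above; the code is just a sketch to verify them):

```python
# Committing to the trade a million times, each with an independent
# 1/10,000,000 risk of eternal torture.
p = 1 / 10_000_000
n = 1_000_000

p_bad = 1 - (1 - p) ** n
print(p_bad)      # ~0.095, close to the ~0.1 probability quoted above

print(100 * n)    # 100,000,000: the total cash from a million $100 trades
```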