re: (1), can you point me to a good introductory reference on this? From a quick glance at the Allais paradox, it looks like the issue is that the implicit “certainty bias” isn’t any consistent form of risk aversion either, but maybe more like an aversion to the distinctive disutility of regret when you know you otherwise could have won a sure thing? But maybe I just need to read more about these “broader” cases!
re: (2), the obvious motivation would be to avoid “overturning unanimous preferences”! It seems like a natural way to respect different people’s attitudes to risk would be to allow them to choose (between fairly weighted options, that neither systematically advantage nor disadvantage them relative to others) how to weight potential costs vs benefits as applied to them personally.
On the main objection: sure, but traditional EU isn’t motivated merely on grounds of being “intuitive”. Insofar as that’s the only thing going for REU, it seems that being counterintuitive is a much greater cost for REU specifically!
the standard response by ordinary people might reflect the fact that they’re not total hedonist utilitarians more than it does the fact that they are not Buchakians.
How so? The relevant axiological claim here is just that the worst dystopian futures are at least as bad as the best utopian futures are good. You don’t have to be a total hedonist utilitarian (as indeed, I am not) in order to believe that.
I mean, do you really imagine people responding, “Sure, in principle it’d totally be worth destroying the world to prevent a 1 in 10 million risk of a sufficiently dystopian long-term future, if that future was truly as bad as the more-likely utopian alternative was good; but I just don’t accept the evaluative claim that the principle is conditioned on here. A billion years of suffering for all humanity just isn’t that bad!”
Seems dubious.
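For what it’s worth, the arithmetic here can be made concrete. Below is a minimal sketch of the gamble from the comment above, evaluated both by ordinary expected utility and by Buchak-style risk-weighted expected utility (the rank-dependent formula, with outcomes ordered worst to best). The risk function r(p) = p² and the symmetric stakes ±U are purely illustrative assumptions on my part, not anything from the thread.

```python
def reu(outcomes, probs, r):
    """Buchak-style risk-weighted EU.

    Sorts outcome/probability pairs worst-to-best, then applies the
    rank-dependent formula: start from the worst utility and add each
    successive utility increment weighted by r(prob of doing at least
    that well).
    """
    pairs = sorted(zip(outcomes, probs))
    us = [u for u, _ in pairs]
    ps = [p for _, p in pairs]
    total = us[0]
    for j in range(1, len(us)):
        total += r(sum(ps[j:])) * (us[j] - us[j - 1])
    return total

# Illustrative stipulation: utopia is worth +U and the dystopian future
# is (at least) as bad as -U, per the axiological claim in the comment.
U = 1.0
p = 1e-7  # the 1-in-10-million dystopia risk
gamble = ([-U, U], [p, 1 - p])

eu = reu(*gamble, r=lambda q: q)         # r(q) = q recovers ordinary EU
reu2 = reu(*gamble, r=lambda q: q ** 2)  # a moderately risk-averse agent
```

Both values come out just under U, and in particular well above the sure-thing value 0 of destroying the world: eu is about U·(1 − 2×10⁻⁷) and reu2 about U·(1 − 4×10⁻⁷). So with symmetric stakes, even a moderately risk-averse REU agent takes the gamble at these odds, which fits the point that ordinary resistance more plausibly traces to rejecting the symmetric-stakes axiology than to Buchakian risk weighting.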