Thanks for the comments, Richard!
On (1): the standard response here is that this won’t work across the board because of something like the Allais preferences. In that case, there just isn’t any way to assign utilities to the outcomes such that ordering by expected utility recovers the Allais preferences. So, while the Sheila case is a simple way to illustrate the risk-averse phenomenon, the phenomenon is much broader, and there are cases in which diminishing marginal utility of pleasure won’t account for our intuitive responses.
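To make that concrete, here is the textbook Allais setup (the standard illustrative payoffs, nothing specific to the Sheila case): 1A gives $1M for sure; 1B gives a 10% chance of $5M, an 89% chance of $1M, and a 1% chance of nothing; 2A gives an 11% chance of $1M and otherwise nothing; 2B gives a 10% chance of $5M and otherwise nothing. The common pattern is to prefer 1A to 1B but 2B to 2A, and no assignment of utilities makes both preferences maximize expected utility:

$$
\begin{aligned}
1A \succ 1B &\;\Rightarrow\; u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.89\,u(1\mathrm{M}) + 0.01\,u(0)\\
&\;\Rightarrow\; 0.11\,u(1\mathrm{M}) - 0.01\,u(0) > 0.10\,u(5\mathrm{M}),\\
2B \succ 2A &\;\Rightarrow\; 0.10\,u(5\mathrm{M}) + 0.90\,u(0) > 0.11\,u(1\mathrm{M}) + 0.89\,u(0)\\
&\;\Rightarrow\; 0.10\,u(5\mathrm{M}) > 0.11\,u(1\mathrm{M}) - 0.01\,u(0).
\end{aligned}
$$

The two conclusions contradict each other for any utility function, however sharply diminishing, which is why diminishing marginal utility on its own can’t capture this pattern of risk aversion.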
On (2): it’s possible you could do something like this, but it seems a strange thing to put into axiology. Why should benefits to Bob contribute less to the goodness of a situation just because of the risk attitudes he has?
On the main objection: I think you’re probably right about the response many would have to this question, but that’s also true if you ask them ‘Should we do something that increases the probability of our billion-year existence by 1 in 10^14 rather than saving a million lives right now?’ I think expected utility theory comes out as pretty unintuitive when we’re thinking about long-term scenarios too. It’s not just a problem for Buchak. And, in any case, the standard response by ordinary people might reflect the fact that they’re not total hedonist utilitarians more than it does the fact that they are not Buchakians.
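To spell out the arithmetic behind that question (with a purely illustrative population figure, since nothing in the question fixes one): if a secured billion-year future would contain on the order of $10^{24}$ lives, then orthodox expected utility reasoning values the probability-shifting option at

$$
10^{-14} \times 10^{24} = 10^{10} \ \text{expected future lives},
$$

which swamps the $10^{6}$ lives saved for certain; and that is exactly the verdict most people balk at.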
Thanks!
re: (1), can you point me to a good introductory reference on this? From a quick glance at the Allais paradox, it looks like the issue is that the implicit “certainty bias” isn’t any consistent form of risk aversion either, but maybe more like an aversion to the distinctive disutility of regret when you know you otherwise could have won a sure thing? But maybe I just need to read more about these “broader” cases!
re: (2), the obvious motivation would be to avoid “overturning unanimous preferences”! It seems like a natural way to respect different people’s attitudes to risk would be to allow them to choose (between fairly weighted options that neither systematically advantage nor disadvantage them relative to others) how to weight potential costs vs. benefits as applied to them personally.
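For anyone who wants the mechanics spelled out, here is a minimal sketch, in Python and with made-up numbers, of how an agent’s chosen risk function reweights their potential costs and benefits. It follows the standard formulation of Buchak-style risk-weighted expected utility (REU); the specific utilities, probabilities, and the r(p) = p² choice are purely illustrative.

```python
# Minimal sketch of risk-weighted expected utility (REU): order outcomes from
# worst to best and weight each utility increment by r(probability of doing at
# least that well). All numbers below are purely illustrative.

def reu(outcomes, r):
    """outcomes: list of (probability, utility) pairs whose probabilities sum to 1.
    r: the agent's risk function, mapping [0, 1] to [0, 1] with r(0)=0 and r(1)=1."""
    ordered = sorted(outcomes, key=lambda pu: pu[1])  # worst to best
    total = ordered[0][1]                             # baseline: utility of the worst outcome
    for i in range(1, len(ordered)):
        p_at_least = sum(p for p, _ in ordered[i:])   # chance of doing at least this well
        total += r(p_at_least) * (ordered[i][1] - ordered[i - 1][1])
    return total

neutral = lambda p: p        # r(p) = p recovers ordinary expected utility
avoidant = lambda p: p ** 2  # a convex r discounts improbable gains (risk-avoidance)

gamble = [(0.5, 0.0), (0.5, 100.0)]  # coin flip between 0 and 100 utils
sure_thing = [(1.0, 40.0)]           # 40 utils guaranteed

print(reu(gamble, neutral), reu(sure_thing, neutral))    # 50.0 vs 40.0: EU takes the gamble
print(reu(gamble, avoidant), reu(sure_thing, avoidant))  # 25.0 vs 40.0: the risk-avoider doesn't
```

The point of the sketch is just that letting each person fix their own r is one concrete way of weighting potential costs vs. benefits as applied to them personally, without touching anyone else’s weights.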
On the main objection: sure, but traditional EU isn’t motivated merely on grounds of being “intuitive”. Insofar as that’s the only thing going for REU, it seems that being counterintuitive is a much greater cost for REU specifically!
“the standard response by ordinary people might reflect the fact that they’re not total hedonist utilitarians more than it does the fact that they are not Buchakians.”

How so? The relevant axiological claim here is just that the worst dystopian futures are at least as bad as the best utopian futures are good. You don’t have to be a total hedonist utilitarian (as indeed, I am not) in order to believe that.
I mean, do you really imagine people responding, “Sure, in principle it’d totally be worth destroying the world to prevent a 1 in 10 million risk of a sufficiently dystopian long-term future, if that future was truly as bad as the more-likely utopian alternative was good; but I just don’t accept the evaluative claim that the principle is conditioned on here. A billion years of suffering for all humanity just isn’t that bad!”
Seems dubious.