Wow, what an interesting (and disturbing) paper!
My initial response is to think that it provides a powerful argument for why we should reject (Buchak's version of) risk-averse decision theory. A couple of quick clarificatory questions before getting to my main objection:
(1)
If Sheila chooses to go to Shapwick Heath, we might say that she is risk-averse.
How do we distinguish risk-aversion from, say, assigning diminishing marginal value to pleasure? I know you previously stipulated that Sheila has the utility function of a hedonistic utilitarian, but I'm wondering if you can really stipulate that. If she really prefers the certainty of 49 hedons over a 50-50 chance of 100, then it seems to me that she doesn't really value 100 hedons as being more than twice as good (for her) as 49. Intuitively, that makes more sense to me than risk aversion per se.
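To make the worry concrete, here is a minimal numerical sketch; the square-root utility function is purely an illustrative assumption, not anything from the paper:

```python
# If Sheila's utility in hedons is concave (here u(x) = sqrt(x), an
# assumption for illustration only), then plain expected-utility
# maximization already prefers the sure 49 hedons to the 50-50 gamble
# on 100 hedons, with no special risk-weighting needed.

def u(hedons: float) -> float:
    return hedons ** 0.5  # diminishing marginal value of pleasure

eu_sure = u(49)                         # 7.0
eu_gamble = 0.5 * u(100) + 0.5 * u(0)   # 5.0

print(eu_sure > eu_gamble)  # True: the sure thing wins under plain EU
```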
(2)
it doesn't count against the Risk Principle* or the use of risk-weighted expected utility theory for moral choice that they lead to violations of the Ex Ante Pareto Principle. Any plausible decision theory will do likewise.
Can you say a bit more about this? In particular, what's the barrier to aggregating attitude-adjusted individual utilities, such that harms to Bob count for more, and benefits to Bob count for less, yielding a greater total moral value for outcome A than for B? (As before, I guess I'm just really suspicious about how you're assigning utilities in these sorts of cases, and want the appropriate adjustments to be built into our axiology instead. Are there compelling objections to this alternative approach?)
(Main objection)
It sounds like the main motivation for REU is to "capture" the responses of apparently risk-averse people. But then it seems to me that your argument in this paper undercuts the claim that Buchak's model is adequate to this task, because I'm pretty confident that if you go up to an ordinary person and ask them, "Should we destroy the world in order to avoid a 1 in 10 million risk of a dystopian long-term future, on the assumption that the future is vastly more likely to be extremely wonderful?" they would think you are insane.
So why should we give any credibility whatsoever to this model of rational choice? If we want to capture ordinary sorts of risk aversion, there must be a better way to do so. (Maybe discounting low-probability events and giving extra weight to "sure things", for example, though that does seem plainly irrational. A better approach, I suspect, would be something like Alejandro suggested in terms of properly accounting for the disutility of regret.)
Thanks for the comments, Richard!

On (1): the standard response here is that this won't work across the board, because of something like the Allais preferences. In that case, there just isn't any way to assign utilities to the outcomes such that ordering by expected utility yields the Allais preferences. So, while the Sheila case is a simple way to illustrate the risk-averse phenomenon, the phenomenon is much broader, and there are cases in which diminishing marginal utility of pleasure won't account for our intuitive responses.
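For concreteness, here is the textbook version of the Allais setup (the dollar amounts are the standard ones from the literature, not anything specific to the paper). Most people prefer 1A over 1B and 2B over 2A, but a quick check shows that no assignment of utilities rationalizes both preferences under expected utility:

```python
# Textbook Allais gambles (outcomes in $M):
#   1A: $1M for sure            1B: 89% $1M, 10% $5M, 1% $0
#   2A: 11% $1M, 89% $0         2B: 10% $5M, 90% $0
# Whatever utilities u0 <= u1 <= u5 you pick, EU(1A) - EU(1B) equals
# EU(2A) - EU(2B), so EU theory cannot deliver 1A > 1B and 2B > 2A.

import random

for _ in range(5):
    u0, u1, u5 = sorted(random.uniform(0, 100) for _ in range(3))
    d1 = u1 - (0.89 * u1 + 0.10 * u5 + 0.01 * u0)            # EU(1A) - EU(1B)
    d2 = (0.11 * u1 + 0.89 * u0) - (0.10 * u5 + 0.90 * u0)   # EU(2A) - EU(2B)
    print(round(d1, 9) == round(d2, 9))  # True every time
```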
On (2): it's possible you might do something like this, but it seems a strange thing to put into axiology. Why should benefits to Bob contribute less to the goodness of a situation just because of the risk attitudes he has?
On the main objection: I think you're probably right about the response many would have to this question, but that's also true if you ask them "Should we do something that increases the probability of our billion-year existence by 1 in 10^14 rather than saving a million lives right now?" I think expected utility theory comes out as pretty unintuitive when we're thinking about long-term scenarios too. It's not just a problem for Buchak. And, in any case, the standard response by ordinary people might reflect the fact that they're not total hedonist utilitarians more than it does the fact that they are not Buchakians.
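As a rough check on why plain expected utility also bites that bullet, here is the back-of-the-envelope breakeven calculation (the framing and figures are illustrative assumptions of mine, not the paper's):

```python
# By expected-utility lights, a 1-in-10^14 boost to the probability of
# a long flourishing future beats saving a million lives now exactly
# when the future contains more than 10^6 / 10^-14 = 10^20 expected
# lives. Some longtermist estimates of future population run far above
# that, which is why EU endorses the counterintuitive trade.

lives_saved_now = 1e6
prob_increase = 1e-14
breakeven_future_lives = lives_saved_now / prob_increase

print(f"{breakeven_future_lives:.0e} future lives needed for the gamble to win")
```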
Thanks!

re: (1), can you point me to a good introductory reference on this? From a quick glance at the Allais paradox, it looks like the issue is that the implicit "certainty bias" isn't any consistent form of risk aversion either, but maybe more like an aversion to the distinctive disutility of regret when you know you otherwise could have won a sure thing? But maybe I just need to read more about these "broader" cases!
re: (2), the obvious motivation would be to avoid "overturning unanimous preferences"! It seems like a natural way to respect different people's attitudes to risk would be to allow them to choose (between fairly weighted options that neither systematically advantage nor disadvantage them relative to others) how to weight potential costs vs. benefits as applied to them personally.
On the main objection: sure, but traditional EU isn't motivated merely on grounds of being "intuitive". Insofar as that's the only thing going for REU, it seems that being counterintuitive is a much greater cost for REU specifically!
the standard response by ordinary people might reflect the fact that they're not total hedonist utilitarians more than it does the fact that they are not Buchakians.
How so? The relevant axiological claim here is just that the worst dystopian futures are at least as bad as the best utopian futures are good. You don't have to be a total hedonist utilitarian (as indeed, I am not) in order to believe that.
I mean, do you really imagine people responding, "Sure, in principle it'd totally be worth destroying the world to prevent a 1 in 10 million risk of a sufficiently dystopian long-term future, if that future was truly as bad as the more-likely utopian alternative was good; but I just don't accept the evaluative claim that the principle is conditioned on here. A billion years of suffering for all humanity just isn't that bad!"

Seems dubious.

And, an admittedly more boring objection:
I'll say the long happy future (i.e. lh) is a thousand times less likely than extinction... and the long miserable future (i.e. lm) is a hundred times less likely than that
Maybe I'm unduly optimistic, but I have trouble wrapping my head around how lm could be even that likely. (E.g. it seems like suicide provides at least some protection against worst-case scenarios, unless we're somehow imagining such totalitarian control that the mistreated can't even kill themselves? But if such control is possible, why wouldn't the controllers just bliss out their subjects? The scenario makes no sense to me.)
How robust are the model's conclusions to large changes in the probability of lm (e.g. reducing its probability by 3 to 6 orders of magnitude)?
Yes, it's reasonably sensitive to this, though as you increase how risk averse you are, you also get extinction winning out even for lower and lower probabilities of lm. It's really a tradeoff between those two.
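A minimal sketch of that tradeoff, assuming a Buchak-style risk-weighted expected utility with risk function r(p) = p^k; the payoffs, probabilities, and the p^k family are all illustrative assumptions on my part, not the paper's calibration:

```python
# Buchak-style REU: sort outcomes from worst to best; each utility
# increment is weighted by r(probability of doing at least that well).
# r(p) = p**k with k > 1 is risk-averse; k = 1 recovers ordinary EU.

def reu(outcomes, k):
    """outcomes: list of (probability, utility) pairs summing to 1."""
    outs = sorted(outcomes, key=lambda pu: pu[1])
    total, tail = outs[0][1], 1.0
    for i in range(1, len(outs)):
        tail -= outs[i - 1][0]                 # P(getting at least outs[i])
        total += tail ** k * (outs[i][1] - outs[i - 1][1])
    return total

# Toy payoffs: long miserable future -1, long happy future +1, and all
# near-term outcomes (including extinction) lumped together at 0.
# Going extinct now is the sure 0, so "continue" wins iff REU > 0.
for k in (1, 2, 4, 8):
    for p_lm in (1e-4, 1e-6, 1e-8):
        p_lh = 1e-2
        gamble = [(p_lm, -1.0), (1 - p_lm - p_lh, 0.0), (p_lh, 1.0)]
        verdict = "continue" if reu(gamble, k) > 0 else "extinction"
        print(f"k={k}, p(lm)={p_lm:.0e}: {verdict}")
```

With k = 1 continuing always wins here, but as k grows, extinction wins at ever smaller values of p(lm), which is the tradeoff described above.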
On your concerns about the probability of lm: I think people very often don't commit suicide even when their life falls below the level at which it's worth living. This might be because of optimism about the future, or connection to others and the feeling of obligation towards them, or because of an instinct for survival.