There is some debate as to whether one must consider the “background” amount of utility in the world to implement WLU; that is, is WLU a state-of-the-world-based theory or a difference-making theory? I adopt the latter version, which is equivalent to a state-of-the-world-based theory that sets the “background” utility of the world to zero. I do so because estimating the aggregate utility of the entire world seems intractable at present, fraught with even more ethical questions, and outside the scope of this project.
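To see why this is a substantive choice rather than mere bookkeeping, note that WLU is not translation-invariant, so the assumed level of background utility can affect rankings. A minimal sketch, assuming the weighted-linear form WLU(X) = E[w(X)·X] / E[w(X)]; the weighting function and payoffs below are made up for illustration:

```python
def wlu(outcomes, probs, w):
    """Weighted-linear utility: WLU = sum(p * w(u) * u) / sum(p * w(u)).
    The (made-up) weighting function w down-weights outcomes far from zero."""
    num = sum(p * w(u) * u for p, u in zip(probs, outcomes))
    den = sum(p * w(u) for p, u in zip(probs, outcomes))
    return num / den

# Hypothetical weighting function, purely for illustration
w = lambda u: 1.0 / (1.0 + u * u)

gamble = [0.0, 100.0]
probs = [0.5, 0.5]

print(wlu(gamble, probs, w))                        # ~0.01
print(wlu([u + 1000.0 for u in gamble], probs, w))  # ~1045.3, not ~1000.01
```

Since shifting every outcome by 1,000 does not just shift the WLU by 1,000, WLU's verdicts can depend on the background level, and fixing it at zero (the difference-making version) is a genuine modelling commitment.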
Couldn’t you just use your utility assignments from the section Defining the Baseline Utility of Each State of the World for a state-of-the-world-based version, like I think you do for (state-of-the-world or non-difference-making) REU? If this is a problem for WLU, it would also be one for REU.
That being said, the possible states you consider in that section seem pretty limited, too: only “the human-equivalent DALY burden of each harm”, with the harms being “Malaria, x-risk, hens, shrimp (ammonia + harvest/slaughter)”. This could limit the applicability of the results for (non-difference-making) risk and ambiguity aversion.
It seems the higher-ranked options under risk-neutral EV maximization are also ranked higher under (non-difference-making, ambiguity-neutral) REU here. Including background utility might not change this, based on:
- Christian Tarsney’s Exceeding Expectations: Stochastic Dominance as a General Decision Theory, which argues that, under wide uncertainty about background utility, riskier, higher-EV bets are favoured even by the stochastic dominance preorder, and
- the fact that the stochastic dominance preorder (with background utility) is weaker than REU and risk-neutral EV maximization (each also with background utility), i.e. if X strictly/weakly stochastically dominates Y, then X is strictly/weakly better than Y under REU and risk-neutral EV maximization (see the sketch below).
Tarsney uses the broad conclusions from that paper in The epistemic challenge to longtermism, footnote 43:

“In Tarsney (2020), I try to develop a principled anti-fanatical view, based on stochastic dominance, that tells us when it’s permissible to deviate from expected value maximization and ignore small probabilities. The thresholds for ‘small’ probability that this view generates depend on various features of the choice situation, including in particular the agent’s degree of ‘background uncertainty’ about sources of value in the world unaffected by her choices. (Greater background uncertainty generates stronger stochastic dominance constraints that narrow the class of Pascalian choices in which deviations from expected value maximization are permitted.) §5.4 of the paper considers the implications for choices like our working example, between existential risk mitigation and interventions with more certain, near-term payoffs. My own conclusion is that our background uncertainty is probably great enough that even the anti-fanatic is required to prioritize existential risk mitigation when it maximizes expected value. But this conclusion is not particularly robust—other reasonable estimates of the relevant epistemic probabilities might lead to the opposite conclusion. So by the lights of my own preferred anti-fanatical view, cases like our working example are borderline; deciding whether expected value maximization is mandatory or optional in such cases will require more precise estimates of the relevant probabilities and stakes.”
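To make the second bullet point concrete, here is a minimal sketch of how wide background uncertainty pushes REU toward the risk-neutral EV ranking. It assumes Buchak-style risk-weighted expected utility with risk function r(p) = p², and the prospects and background distribution are made-up numbers, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def reu(samples, r=lambda p: p ** 2):
    """Risk-weighted expected utility (Buchak) on an empirical sample:
    REU = x_(1) + sum_{i>=2} r(P(X >= x_(i))) * (x_(i) - x_(i-1))."""
    x = np.sort(samples)
    n = len(x)
    tail_probs = (n - np.arange(1, n)) / n  # P(X >= x_(i)) for i = 2..n
    return x[0] + np.sum(r(tail_probs) * np.diff(x))

safe = rng.normal(10.0, 1.0, 100_000)    # near-certain payoff
risky = rng.normal(15.0, 20.0, 100_000)  # higher EV, much riskier

# Wide, statistically independent background value, as in Tarsney's setup
background = rng.normal(0.0, 1_000.0, 100_000)

for label, option in [("safe", safe), ("risky", risky)]:
    print(label,
          "EV:", option.mean().round(2),
          "REU alone:", reu(option).round(2),
          "REU + background:", reu(option + background).round(2))
```

With these made-up numbers, REU prefers the safe option in isolation but the risky option once the wide background is added, matching the risk-neutral EV ranking.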
However, one major limitation of Tarsney’s paper is that there doesn’t seem to be enough statistically independent background value to which to apply the argument, because, for example, we have uncertainty across stances that apply to ~all sources of moral value, like how moral value scales with brain size (number of neurons, synapses, etc.), functions or states, e.g. each view considered for RP’s moral weight estimates. We could also have correlated views about the sign of background value and the sign of the value of extinction risk reduction, e.g. a stance that’s relatively more pessimistic about both vs one that’s relatively more optimistic about both. In particular, if we expect the background utility to be negative, we should be more pessimistic about the difference-making value of extinction risk reduction, which is then itself more likely to be negative. So, it would probably be better to do a simulation like you’ve done and directly check, as sketched below.[2]
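A rough illustration of that worry, with entirely made-up numbers and distributions: if a single moral stance (say, a view about how value scales with neuron counts) multiplies both the background value and the value of reducing extinction risk, the two are correlated rather than independent, and conditioning on the stance changes the verdict:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stance multiplier applied to ~all sources of moral value,
# e.g. a more pessimistic vs a more optimistic view; probabilities made up
stance = rng.choice([-1.0, 1.0], size=n, p=[0.4, 0.6])

# Background value and the value of x-risk reduction share the stance,
# so they are correlated, not statistically independent
background = stance * rng.lognormal(6.0, 1.0, n)
xrisk_value = stance * rng.lognormal(0.0, 1.0, n)

print("corr:", np.corrcoef(background, xrisk_value)[0, 1].round(2))
print("P(x-risk reduction net negative):", (xrisk_value < 0).mean().round(2))
print("E[x-risk value | background < 0]:",
      xrisk_value[background < 0].mean().round(2))
```

Here, learning that background value is negative makes extinction risk reduction negative in expectation as well, which is exactly the kind of correlation that blocks the independent-background version of the stochastic dominance argument.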
The value of what you don’t causally affect could also make a difference for your results for difference-making risk aversion, assuming acausal influence matters with non-tiny probability (e.g. Carlsmith, 2021, section II) or matters substantially in expectation (e.g. MacAskill et al., 2021[3]). See:
- Hayden Wilkinson’s Can an evidentialist be risk-averse?
- Brian Tomasik’s How the Simulation Argument Dampens Future Fanaticism
- This thread by Magnus Vinding
- Acausal trade
- Evidential Cooperation in Large Worlds (https://longtermrisk.org/ecl)
Or, maybe the argument can be saved by conditioning on each stance that jointly determines the moral value of our impacts and what we don’t affect, and either checking if the recommendations agree or combining them some other way. However, I suspect they often won’t agree or combine nicely, so I’m skeptical of this approach.
In general, I’m skeptical about applying expected value reasoning over normative uncertainty. Still, the case for intertheoretic comparisons between EDT and CDT over the same moral views is stronger (although I still have general reservations), since both theories assign the same moral value to each definite outcome:

“Therefore, we will argue that the problem of intertheoretic value comparisons is solved by requiring that V_EDT = V_CDT.”
Where they disagree is in how to assign probabilities to outcomes.
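A toy Newcomb-style illustration of that split (with made-up payoffs and predictor accuracy): both decision theories evaluate acts with the same value function V over definite outcomes, and the entire disagreement is over which probabilities to use:

```python
# Same value V for each definite (act, prediction) outcome under both theories
V = {("one-box", "predicted one-box"): 1_000_000,
     ("one-box", "predicted two-box"): 0,
     ("two-box", "predicted one-box"): 1_001_000,
     ("two-box", "predicted two-box"): 1_000}

# EDT conditions on the act: P(prediction | act), with a 99%-accurate predictor
p_edt = {"one-box": {"predicted one-box": 0.99, "predicted two-box": 0.01},
         "two-box": {"predicted one-box": 0.01, "predicted two-box": 0.99}}

# CDT holds the prediction causally fixed: act-independent probabilities
# (50/50 here, purely for illustration)
p_cdt = {"predicted one-box": 0.5, "predicted two-box": 0.5}

for act in ("one-box", "two-box"):
    ev_edt = sum(p_edt[act][s] * V[(act, s)] for s in p_cdt)
    ev_cdt = sum(p_cdt[s] * V[(act, s)] for s in p_cdt)
    print(act, "EDT:", ev_edt, "CDT:", ev_cdt)
```

With the same V, EDT favours one-boxing and CDT favours two-boxing; nothing about the values of the definite outcomes needs to be renegotiated to compare the two theories.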
On the other hand, a similar argument could favour, over risk-neutral expected value maximization, reweightings of probabilities or outcome values that lead to greater stakes.
Some time ago, I took part in an exchange on risk aversion that may be useful.
“I remind you that “risk aversion” is a big deal in economics/finance because of the decreasing marginal utility of income. In fact, in economics and finance, risk aversion for rational agents is not a primitive parameter, but a consequence of the CRRA parameter of your consumption function. So I think risk aversion turns quite meaningless for non monetary types of loss.”
See the complete exchange here:
https://forum.effectivealtruism.org/posts/mJwZ3pTgwyTon2xmw/?commentId=grW3y8JCmNK2rjFPA
The concept is slippery: you really need to clarify why utility is non-convex in the relevant variable. Risk aversion in economics is not primitive: it is derived from decreasing marginal utility of consumption...
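To illustrate the point made in that exchange, here is a minimal sketch of how risk aversion falls out of decreasing marginal utility under CRRA preferences (the gamble and the risk-aversion coefficient are made-up numbers):

```python
import numpy as np

# CRRA utility: u(c) = c**(1 - eta) / (1 - eta) for eta != 1, log(c) for eta = 1.
# Risk aversion is not a primitive here: it falls out of the curvature eta.
def crra(c, eta):
    return np.log(c) if eta == 1.0 else c ** (1 - eta) / (1 - eta)

eta = 2.0                   # made-up coefficient of relative risk aversion
low, high = 50.0, 150.0     # a 50/50 consumption gamble, made-up numbers

expected_utility = 0.5 * crra(low, eta) + 0.5 * crra(high, eta)
# Certainty equivalent: the sure consumption with the same utility
# (inverts u; valid for eta != 1)
certainty_equivalent = (expected_utility * (1 - eta)) ** (1 / (1 - eta))

print("expected value:", 0.5 * (low + high))                    # 100.0
print("certainty equivalent:", round(certainty_equivalent, 1))  # 75.0 < 100
```

The gamble’s certainty equivalent (75) sits well below its expected value (100) purely because marginal utility is decreasing, which is the sense in which risk aversion in economics is derived rather than primitive.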