EDIT: I think I may have been mixing up risk-aversion with respect to welfare and risk-aversion with respect to the difference made by one’s intervention, as discussed in section 4.2 here. Usually, although not necessarily, a bounded utility function will be concave above some point, say 0, and convex below it. Concavity implies risk-aversion and would lead you to give extra weight to avoiding particularly bad scenarios (e.g. those close to or below 0) compared to improving scenarios in the concave region that are already good. This explains why we buy insurance, and it’s consistent with the maxipok rule to maximize the probability of an OK outcome (which doesn’t distinguish between bad outcomes, even though some could be far worse than just “not okay”, as this paper discusses).
Consistent with what I said below, if you’re risk-averse/concave above 0, a small chance of making the future really great is not as compelling as it would otherwise be. However, ensuring the future is good rather than at best neutral (say, extinction of all moral patients under symmetric population ethics, or human extinction and long-lasting net suffering in the wild) is more compelling than it would otherwise be.
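To make this concrete, here’s a minimal numeric sketch, assuming a toy bounded utility (arctan of total welfare, so concave above 0 and convex below) and entirely made-up numbers:

```python
import math

def u(welfare):
    # toy bounded utility of total welfare: concave above 0, convex below
    return math.atan(welfare)

def expected_u(lottery):
    # lottery: list of (probability, total welfare) pairs
    return sum(p * u(w) for p, w in lottery)

# Risk-aversion in the concave region (insurance-like): a sure, decent future beats
# a 50/50 gamble with the same expected welfare.
sure_ok = [(1.0, 50)]
gamble  = [(0.5, 99), (0.5, 1)]
print(expected_u(sure_ok), expected_u(gamble))          # ~1.551 vs ~1.173

# A small chance of an astronomically great future adds little once utility is bounded...
small_chance_of_great = [(0.001, 10**12), (0.999, 50)]
print(expected_u(small_chance_of_great) - expected_u(sure_ok))   # ~0.00002

# ...while the same probability of a merely neutral outcome (welfare ~0) instead of a
# good one costs much more, so preventing it gets weighted far more heavily.
small_chance_of_neutral = [(0.001, 0), (0.999, 50)]
print(expected_u(sure_ok) - expected_u(small_chance_of_neutral))  # ~0.0016
```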
If you think the universe is large, has extreme net utility (either negative or positive) regardless of what we do, and contains orders of magnitude more moral patients we can’t affect than ones we can, then it gets messier again.
Original comment follows:
I suspect the best fundamental response to Pascalian problems is to actually have your utility function bounded above and below. Whether or not longtermist interventions are Pascalian, astronomical stakes then become much less compelling, and this leads to a preference for higher probabilities of making a difference, which is incompatible with risk-neutrality. I guess this is a kind of risk-aversion, although preventing extremely unlikely horrible outcomes (or making a tiny difference to their probability of occurrence) isn’t as compelling either.
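As a toy illustration of that preference (made-up numbers, with arctan standing in for some bounded utility of the difference you make): risk-neutral expected value favors the Pascalian option, while the bounded utility favors the higher-probability one.

```python
import math

# Made-up numbers: a "Pascalian" option with a tiny probability of an astronomical gain,
# and a "mundane" option with a modest probability of a modest gain.
pascalian = (1e-10, 1e20)   # (probability of making a difference, welfare gained if you do)
mundane   = (0.1, 100)

def expected_welfare(p, gain):
    return p * gain                    # risk-neutral expected value

def expected_bounded_utility(p, gain):
    return p * math.atan(gain)         # one possible bounded utility of the gain

print(expected_welfare(*pascalian), expected_welfare(*mundane))                  # 1e10 vs 10
print(expected_bounded_utility(*pascalian), expected_bounded_utility(*mundane))  # ~1.6e-10 vs ~0.16
```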
A bounded utility function can’t be additive[1]. Lives vs headaches (or torture vs dust specks) also gives me reason to believe the value of the whole is not the sum of the value of its parts. I’d rather give up additivity (or separability or independence[2]) than continuity or my strong prioritarianism. See also this theorem on social welfare functions (up until axiom 5), CLR’s writing on value lexicality and its references, and Stuart Armstrong on the sadistic conclusion.

I think respecting autonomy and individuals’ preferred tradeoffs is a reason for additivity/separability/independence (see Harsanyi’s argument here and some more accessible discussions here and here; there are other similar theorems), but it’s not more compelling than my intuitions against it.
1. although it can be, up to an increasing transformation, e.g. tan and arctan. The social welfare function arctan(∑ᵢ uᵢ) is bounded, and when there’s no uncertainty, it is just utilitarianism and produces the same rankings of choices. But with this function, you can’t in general ignore unaffected (identically distributed) individuals when comparing choices if you have uncertainty about their utilities (and about how many of them exist); see the sketch after footnote 2.
2. also called independence of unconcerned agents or independence of the utilities of the unconcerned
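Here’s a small sketch of footnote 1’s point, with made-up numbers: two options over the individuals we can affect get ranked differently by E[arctan(total welfare)] depending on our beliefs about the individuals neither option touches, even though plain summation would let us ignore them.

```python
import math
from itertools import product

def expected_swf(affected, background):
    # E[arctan(u_affected + u_background)], with the two lotteries independent
    return sum(pa * pb * math.atan(ua + ub)
               for (pa, ua), (pb, ub) in product(affected, background))

# Two options for the individuals we can affect (made-up numbers):
option_a = [(0.5, 100), (0.5, 0)]   # risky: expected welfare 50
option_b = [(1.0, 40)]              # safe:  expected welfare 40

# Two sets of beliefs about everyone we can't affect:
background_near_zero     = [(0.5, -10), (0.5, 10)]
background_very_negative = [(0.5, -1e6 - 10), (0.5, -1e6 + 10)]

# With background welfare near zero, the safe option wins (~0.78 vs ~1.54)...
print(expected_swf(option_a, background_near_zero),
      expected_swf(option_b, background_near_zero))
# ...but with a very negative background, arctan is nearly linear in that region,
# so the ranking tracks expected welfare and the risky option wins (by ~1e-11).
print(expected_swf(option_a, background_very_negative) >
      expected_swf(option_b, background_very_negative))
```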
Also, from “The Epistemic Challenge to Longtermism” by Christian Tarsney for the Global Priorities Institute:

If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
For what it’s worth, this doesn’t really justify the case for any particular longtermist intervention, so the case for longtermism only looks robust here if you can predictably make a net positive difference with some small but large enough probability. This probability could actually be negligible, unless you have good reason to believe otherwise.
Also, whether you think the probabilities involved are Pascalian or not, or even care, this work is super cool, and I think the talk is pretty accessible if you’re comfortable with 2nd-year undergrad probability. I definitely recommend watching/reading.
Thanks for this. (I should say I don’t completely understand it.) My intuitions are much more sympathetic to additivity over prioritarianism, but I see where you’re coming from, and it does help to answer my question (and updates me a bit).

I wonder if you’ve seen this. I didn’t take the time to understand it fully, but it looks like the kind of thing you might be interested in. (Also curious to hear whether you agree with the conclusions.)
The blog post was great, thanks for sharing! I’ve come across the paper that blog post is based on, although I didn’t read through the parts on background uncertainty, which is basically the main contribution (other than arguing for stochastic dominance, which was convincing to me). I agree that stochastic dominance is the most important rationality axiom, maybe the only important one, so whatever follows from it plus background uncertainty takes precedence over all other rationality assumptions (the work also assumes utilitarianism, which may be false). The paper is also by Christian Tarsney, and he references it in The Epistemic Challenge to Longtermism, claiming that the probabilities are plausibly low enough that background uncertainty dominates and we should go with the near-term intervention (from footnote 31 on pages 29-30 here):
The line between “intermediate” and “minuscule” probabilities depends on our degree of background uncertainty and on other features of the choice situation, but for total utilitarians in ordinary choice situations, it is probably no greater than 10⁻⁹ (and may be considerably smaller). So, if the stochastic dominance approach is correct, the probabilities we have considered in this paper—starting with p = 2×10⁻¹⁴—are on the threshold, from a utilitarian point of view: It could turn out, on further analysis, that the utilitarian case for longtermism is on very firm decision-theoretic footing (requiring no decision-theoretic assumptions beyond first-order stochastic dominance), but it could also turn out that even though longtermist interventions have greater expected value than short-termist interventions, they are nevertheless rationally optional.
Some other remarks on Tarsney’s stochastic dominance approach:
I think the von Neumann-Morgenstern rationality axioms (except Continuity) are actually justified by stochastic dominance together with certain (usually unstated) assumptions about how to treat sequences of decisions, using money pumps/Dutch books: the point is to trick you into choosing an option that’s stochastically dominated by another. If we accept these assumptions plus Continuity, then we should have a bounded vNM utility function. Tarsney’s results don’t conflict with this, but if you want to avoid Pascal’s wager (or similar problems with tiny probabilities of infinite payoffs according to utilitarianism) while still satisfying the assumptions, then you need to accept Continuity, and your vNM utility function must be bounded.
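A toy version of why boundedness blocks the wager (made-up numbers): holding a tiny probability fixed, an unbounded (here linear) utility lets a large enough payoff beat any sure alternative, while a bounded utility caps the wager’s contribution at that probability times the bound.

```python
import math

p = 1e-20                 # fixed tiny probability of the huge payoff (made up)
sure_alternative = 1.0    # utility of a modest sure outcome, for comparison

for payoff in (1e10, 1e30, 1e50):
    unbounded_eu = p * payoff             # unbounded (linear-in-welfare) utility
    bounded_eu   = p * math.atan(payoff)  # bounded utility: contribution capped at p * pi/2
    print(payoff, unbounded_eu > sure_alternative, bounded_eu > sure_alternative)

# Unbounded: False, True, True -- a big enough payoff eventually beats the sure thing.
# Bounded: always False -- the wager's expected-utility contribution never exceeds ~1.6e-20.
```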
It also gives up the kind of additivity over uncertainty I described in point 1 in my comment. How good an action is can depend on your beliefs about parts of the universe that are totally unaffected by your action, even outside the observable universe. Tarsney defends this in section 7.
With high probability, the value in the entire universe (not just the observable part) is undefined or infinite (positive or negative, but unaffectable by us), since the universe is probably spatially infinite/unbounded; on symmetric views, there’s both infinite positive value and infinite negative value, and the order in which you sum matters. If our impact is at most finite, stochastic dominance either breaks down or forces us to ignore this part of the probability space. Additivity with uncertainty, as I described in point 1, allows us to ignore the parts of the universe we can’t affect.
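The “order in which you sum matters” point is just conditional convergence; here’s a minimal numeric illustration using the alternating harmonic series as a stand-in for interleaved positive and negative value:

```python
import math

def natural_order(n_terms):
    # 1 - 1/2 + 1/3 - 1/4 + ...  (the alternating harmonic series)
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # the same series, reordered: two positive terms, then one negative term
    total, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / odd + 1 / (odd + 2) - 1 / even
        odd += 4
        even += 2
    return total

print(natural_order(300_000), math.log(2))      # ~0.6931 in the usual order
print(rearranged(100_000), 1.5 * math.log(2))   # ~1.0397 after reordering the same terms
```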