Thanks for this. (I should say I don’t completely understand it.) My intuitions are much more sympathetic to additivity than to prioritarianism, but I see where you’re coming from, and it does help to answer my question (and updates me a bit).
I wonder if you’ve seen this. I didn’t take the time to understand it fully, but it looks like the kind of thing you might be interested in. (Also curious to hear whether you agree with the conclusions.)
The blog post was great, thanks for sharing! I’ve come across the paper that blog post is based on, although I didn’t read through the parts on background uncertainty, which are basically the main contribution (other than arguing for stochastic dominance, which was convincing to me). I agree that stochastic dominance is the most important rationality axiom, maybe the only important one, so whatever follows from it + background uncertainty takes precedence over all other rationality assumptions (the work also assumes utilitarianism, which may be false). The paper is also by Christian Tarsney, and he references it in The Epistemic Challenge to Longtermism, claiming that the probabilities are plausibly low enough that background uncertainty dominates and we should go with the near-term intervention (from footnote 31 on pages 29-30 here):
The line between “intermediate” and “minuscule” probabilities depends on our degree of background uncertainty and on other features of the choice situation, but for total utilitarians in ordinary choice situations, it is probably no greater than 10⁻⁹ (and may be considerably smaller). So, if the stochastic dominance approach is correct, the probabilities we have considered in this paper—starting with p = 2×10⁻¹⁴—are on the threshold, from a utilitarian point of view: It could turn out, on further analysis, that the utilitarian case for longtermism is on very firm decision-theoretic footing (requiring no decision-theoretic assumptions beyond first-order stochastic dominance), but it could also turn out that even though longtermist interventions have greater expected value than short-termist interventions, they are nevertheless rationally optional.
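For reference, the notion doing the work in that quote is first-order stochastic dominance. In my own notation (a paraphrase, not quoted from the paper), prospect A first-order stochastically dominates prospect B when

$$\Pr(A \ge t) \;\ge\; \Pr(B \ge t) \quad \text{for every threshold } t,$$

with strict inequality for at least one t. With background uncertainty, the comparison is between the totals A + X and B + X rather than between A and B alone, which is how beliefs about the background value X can affect the verdict.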
Some other remarks on Tarsney’s stochastic dominance approach:
I think the von Neumann-Morgenstern rationality axioms (except Continuity) are actually justified by stochastic dominance together with some (usually unstated) assumptions about how to treat sequences of decisions, via money pumps/Dutch books: the point of a money pump is to trick you into choosing an option that’s stochastically dominated by another. If we accept these assumptions plus Continuity, then we should have a bounded vNM utility function. Tarsney’s results don’t conflict with this, but if you want to avoid Pascal’s wager (or similar gambles with tiny probabilities of payoffs that are infinitely valuable according to utilitarianism) while still satisfying the assumptions, then you need to accept Continuity, and your vNM utility function must be bounded.
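Here’s a toy numerical sketch of that last point (my own illustration, not from Tarsney’s paper; the probability, payoff, and scale numbers are made up): with unbounded (linear) utility, a minuscule probability of an astronomical payoff swamps any sure thing, while a bounded utility function caps how much such gambles can matter.

```python
# Toy illustration of why bounded utility blocks Pascal's wager
# while unbounded utility doesn't. All numbers are made up.
import math

p = 1e-14     # tiny probability of the astronomical payoff
huge = 1e30   # astronomical payoff (in, say, welfare units)
sure = 1.0    # a modest guaranteed payoff

# Unbounded (linear) utility: the tiny-probability gamble wins.
ev_gamble_linear = p * huge               # = 1e16
ev_sure_linear = sure                     # = 1.0
print(ev_gamble_linear > ev_sure_linear)  # True: the wager "succeeds"

# Bounded utility, e.g. u(x) = 1 - exp(-x/scale), capped at 1.
def u(x, scale=10.0):
    return 1.0 - math.exp(-x / scale)

eu_gamble_bounded = p * u(huge)             # at most p * 1 = 1e-14
eu_sure_bounded = u(sure)                   # ≈ 0.095
print(eu_gamble_bounded > eu_sure_bounded)  # False: the wager loses its grip
```

Any bounded, increasing utility function gives the same qualitative result; the exponential form here is just one convenient choice.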
It also gives up the kind of additivity over uncertainty I described in point 1 of my comment: how good an action is can depend on your beliefs about parts of the universe that are totally unaffected by your action, even outside the observable universe. Tarsney defends this in section 7.
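To see how that background-dependence works, here’s a small sketch in the spirit of the paper’s mechanism (the specific actions and the Cauchy background are my own toy choices, not Tarsney’s): neither action stochastically dominates the other on its own, but once a wide, heavy-tailed background value is added, the higher-expected-value action dominates in total.

```python
# Toy illustration of background uncertainty creating stochastic
# dominance between *totals* even when neither action dominates alone.
import numpy as np

gamma = 100.0  # scale of the heavy-tailed Cauchy background value X

def cauchy_cdf(t):
    return 0.5 + np.arctan(t / gamma) / np.pi

# Action A: +1 with prob 0.5, else 0 (EV 0.5). Action B: +0.4 for sure.
# With independent background X, the CDFs of the totals are exact mixtures:
def cdf_A_plus_X(t):
    return 0.5 * cauchy_cdf(t) + 0.5 * cauchy_cdf(t - 1.0)

def cdf_B_plus_X(t):
    return cauchy_cdf(t - 0.4)

# First-order dominance of A+X over B+X requires
# F_{A+X}(t) <= F_{B+X}(t) at every threshold t.
ts = np.linspace(-1e4, 1e4, 200_001)
worst_gap = np.max(cdf_A_plus_X(ts) - cdf_B_plus_X(ts))
print(worst_gap <= 0)  # True on this grid: A+X dominates B+X

# Without background uncertainty, neither A nor B dominates:
# P(A <= 0) = 0.5 > 0 = P(B <= 0), but P(A <= 0.4) = 0.5 < 1 = P(B <= 0.4).
```

The CDFs here are exact mixtures rather than Monte Carlo samples, so the grid check isn’t confounded by sampling noise; the verdict really does flip depending on what you believe about the background distribution.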
With high probability, the value in the entire universe (not just the observable part) is undefined or infinite (positive or negative, but can’t be affected), since the universe is spatially infinite/unbounded with high probability. So if you have symmetric views, there’s both infinite positive value and infinite negative value, the total is at best conditionally convergent, and the order in which you sum matters. If our impact is at most finite, stochastic dominance either breaks down or forces us to ignore this part of the probability space. Additivity with uncertainty, as I described in point 1, allows us to ignore the parts of the universe we can’t affect.
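To make the “order in which you sum” point concrete, here’s a minimal sketch of the classic rearrangement phenomenon (standard math, nothing specific to Tarsney): the same countable set of positive and negative terms can be made to sum to different totals just by reordering, which is exactly the problem with totalling up value in a universe containing infinitely much of both signs.

```python
# Demonstration that summation order matters when the positive and
# negative parts each diverge: the alternating harmonic series.

def standard_order(n_terms):
    # 1 - 1/2 + 1/3 - 1/4 + ...  ->  ln(2)
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # Same terms, reordered: two positive terms, then one negative,
    # repeated  ->  (3/2) ln(2)
    total, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / odd; odd += 2
        total += 1 / odd; odd += 2
        total -= 1 / even; even += 2
    return total

print(standard_order(1_000_000))  # ≈ 0.693147  (ln 2)
print(rearranged(1_000_000))      # ≈ 1.039721  ((3/2) ln 2)
```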