I just wanted to flag some other cases in favor of exponentially discounting the interests of future moral patients purely on the basis of their temporal locations, as in discounted utilitarianism:
Asheim, 2010 has representation theorems for forms of discounted utilitarianism (plus an asymptotic part) under certain fairly intuitive and plausible assumptions. These results are also discussed in West, 2015.
Russell, 2022 (45:40 to the end) shows that discounted utilitarianism seems to be much better behaved (it satisfies more desirable principles) in many ways than other versions of utilitarianism, although it obviously gives up impartiality. Some impossibility results also guarantee that impartiality is inconsistent with other highly intuitive principles in cases involving infinitely many possible moral patients. West, 2015 discusses some such results for cases with infinitely many actual moral patients, and there’s of course the inconsistency of Strong Pareto with full (infinite) impartiality. There are also impossibility theorems for cases where only finitely many moral patients will ever exist, as long as their number (or aggregate value) is unbounded and heavy-tailed in some prospect; see Goodsell, 2021 and Russell, 2023 (I discuss these results in the Anti-utilitarian theorems section of a post). A view on which “all future lives were only worth 1% of present lives, with this discount being constant over time” also satisfies Compensation,[1] which is jointly inconsistent with Separability and Stochastic Dominance, according to Theorem 4 of Russell, 2023.
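To make the form of discounting at issue concrete, here is a minimal sketch of an exponentially discounted utilitarian value function (my own formalization for illustration; the notation is not taken from any of the cited papers). With discount factor $\delta \in (0, 1)$ applied per unit of time, an outcome is ranked by

```latex
% V: discounted utilitarian value of an outcome
% P_t: the set of moral patients existing at time t
% u_{i,t}: the welfare of patient i at time t
V \;=\; \sum_{t=0}^{\infty} \delta^{t} \sum_{i \in P_t} u_{i,t}
```

Note that the quoted 1%-of-present-value view is a different shape of discounting: it applies a single constant factor to all future lives rather than a factor that compounds with time, so it is a step discount rather than the exponential $\delta^{t}$ above.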
Of course, giving up impartiality seems like a very significant cost to me.
Compensation is roughly the principle “that we can always compensate somehow for making things worse nearby, by making things sufficiently better far away (and vice versa)” (Russell, 2023, where it’s also stated formally). It is satisfied pretty generally by theories that are impartial in deterministic finite cases, including total utilitarianism, average utilitarianism, variable value theories, prioritarianism, critical-level utilitarianism, egalitarianism and even person-affecting versions of any of these views. In particular, theoretically “moving” everyone nearby, or “moving” everyone far away, without changing their welfare levels suffices.