as reductions to the probability of an outcome can always be compensated for by proportional increases in its value.
It’s worth noting that this depends on the particular value function being used: holding some other standard assumptions constant, it works if and only if value is unbounded (above). There are bounded value (utility) functions whose expected value we could maximize instead. Among options that approximate total utilitarianism are the expected values of:
- (time- and space-)discounted total welfare,
- rank-discounted total welfare,
- any bounded increasing function (like arctan) of total welfare.
In fact, this last one is also total utilitarianism: it agrees with total utilitarianism on all rankings between (uncertainty-free) outcomes. It’s just not expectational total utilitarianism, which requires directly maximizing the expected value of total welfare.
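To make this concrete, here is a minimal numerical sketch (my own toy illustration, not from the comments above): a St. Petersburg-style prospect paying total welfare 2^k with probability 2^-k. Expected total welfare grows without bound as we include more terms, while the expected value of arctan(total welfare) converges, even though arctan agrees with total welfare on every ranking between risk-free outcomes.

```python
import math

# St. Petersburg-style prospect: total welfare 2^k with probability 2^-k.
# Truncate the series at more and more terms to see where each expectation heads.
for n_terms in (10, 20, 40):
    ev_welfare = sum(2**-k * 2**k for k in range(1, n_terms + 1))            # grows without bound
    ev_arctan = sum(2**-k * math.atan(2**k) for k in range(1, n_terms + 1))  # converges below pi/2
    print(f"{n_terms} terms: E[welfare] = {ev_welfare}, E[arctan(welfare)] = {ev_arctan:.6f}")

# arctan is strictly increasing, so it preserves all rankings between
# uncertainty-free outcomes, agreeing with total utilitarianism there.
assert (math.atan(3.0) > math.atan(2.0)) == (3.0 > 2.0)
```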
And none of this requires giving up the standard axioms of expected utility theory. Further, maximizing a bounded utility function satisfies very natural and similarly defensible extensions of those same rationality axioms, which standard expectational total utilitarianism instead violates: specifically, the Sure-Thing Principle and the Independence axiom extended to prospects with infinitely (countably) many possible outcomes (due to St. Petersburg prospects or similar; see Russell and Isaacs, 2021). I’ve also argued here that utilitarianism is irrational or self-undermining based on those results and other similar ones involving prospects with infinitely many possible outcomes.
We shouldn’t cede standard expected utility theory to expectational total utilitarians or to others maximizing the expected value of unbounded utility functions. They have to accept acting apparently irrationally (getting money-pumped or Dutch-booked, paying to avoid information, etc.) in hypothetical cases where those with bounded utility functions wouldn’t.
(Standard risk aversion with respect to total welfare would be maximizing the expected value of a concave function of total welfare, but that would still be fanatical about avoiding worst cases and would still be an unbounded utility function, so it is vulnerable to the generic objections to unbounded utility functions.)
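As a toy illustration of that parenthetical (my own numbers, not from the comment), take the concave, increasing, unbounded-below utility u(v) = −e^(−v): an arbitrarily small probability of a sufficiently bad outcome can outweigh a near-certain improvement.

```python
import math

u = lambda v: -math.exp(-v)  # concave, increasing, unbounded below

# Option A: sure total welfare of 10.
# Option B: total welfare 11 with probability 1 - eps, but -100 with probability eps.
eps = 1e-30
ev_a = u(10)
ev_b = (1 - eps) * u(11) + eps * u(-100)
print(ev_a > ev_b)  # True: B loses despite the near-certain improvement
```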
Thanks for your comment, Michael. Our team started working through your super helpful recent post last week! We discuss some of these issues (including the last point you mention) in a document where we summarize some of the philosophical background issues. However, we only mention bounded utility very briefly and don’t discuss infinite cases at all. We focus instead on rounding down low probabilities, for two reasons: first, we think that’s what people are probably actually doing in practice, and second, it avoids the seeming conflict between bounded utility and theories of value. I’m sure you have answers to that problem, so let us know!
I got a bit more time to think about this.

I think there probably is no conflict between bounded utility (or capturing risk aversion with concave increasing utility functions) and theories of deterministic value, because without uncertainty/risk, bounded utility functions can agree with unbounded ones on all rankings of outcomes. The utility function just captures risk attitudes with respect to deterministic value.
Furthermore, bounded and concave utility functions can be captured as weighting functions, much like WLU. Suppose you have a utility function u of the value v, which is a function of outcomes. Then, whether u is bounded or concave or whatever, we can still write:
u(v(x)) = \frac{u(v(x))}{v(x)} \cdot v(x) = w(x) \, v(x)

where w(x) = \frac{u(v(x))}{v(x)}.[1] Then, for a random variable X over outcomes:
\mathbb{E}[u(v(X))] = \mathbb{E}[w(X) \, v(X)]
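A quick sanity check of this identity on a toy discrete prospect (all numbers hypothetical), with u = sqrt:

```python
import math

# Toy discrete prospect: values v(x) with probabilities p(x); all values nonzero.
values = [1.0, 4.0, 9.0]
probs = [0.5, 0.3, 0.2]

u = math.sqrt               # any increasing utility function of value
w = lambda v: u(v) / v      # w(x) = u(v(x)) / v(x); v determines w here

lhs = sum(p * u(v) for p, v in zip(probs, values))      # E[u(v(X))]
rhs = sum(p * w(v) * v for p, v in zip(probs, values))  # E[w(X) v(X)]
assert math.isclose(lhs, rhs)
```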
Compare to WLU, with some weighting function w of outcomes:
WLU(X) = \frac{\mathbb{E}[w(X) \, v(X)]}{\mathbb{E}[w(X)]}
The difference is that WLU renormalizes.
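Continuing the same toy setup, a sketch of that difference: WLU divides through by E[w(X)], so the two agree only when the weights average to 1.

```python
import math

# Same toy prospect as above: u = sqrt, w(x) = u(v(x)) / v(x).
values = [1.0, 4.0, 9.0]
probs = [0.5, 0.3, 0.2]
u = math.sqrt
w = lambda v: u(v) / v

ewv = sum(p * w(v) * v for p, v in zip(probs, values))  # E[w(X) v(X)] = E[u(v(X))]
ew = sum(p * w(v) for p, v in zip(probs, values))       # E[w(X)]
print(ewv)       # plain expected utility
print(ewv / ew)  # WLU(X), renormalized by E[w(X)]
```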
By the way, because of this renormalizing, WLU can also be seen as adjusting the probabilities in X to obtain a new prospect. If p is the original probability distribution (X ∼ p, i.e. P(X ∈ A) = p(A) for each set of outcomes A), then we can define a new one by:[2]
q(A) = \frac{1}{\int w(x) \, dp(x)} \int_A w(x) \, dp(x) = \frac{1}{\mathbb{E}_{X \sim p}[w(X)]} \int_A w(x) \, dp(x)

so

WLU(X) = \frac{\mathbb{E}[w(X) \, v(X)]}{\mathbb{E}[w(X)]} = \frac{1}{\int w(x) \, dp(x)} \int w(x) \, v(x) \, dp(x) = \int v(y) \, dq(y) = \mathbb{E}_{Y \sim q}[v(Y)]

We can define w(x) arbitrarily when v(x) = 0 to avoid division by 0.
You can replace the integrals with sums for discrete distributions, but integral notation is more general in measure theory.
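For a discrete prospect, the reweighting is just q(x) ∝ w(x) p(x). A small sketch (same toy numbers as above) checking that WLU(X) = E_{Y∼q}[v(Y)]:

```python
import math

values = [1.0, 4.0, 9.0]
probs = [0.5, 0.3, 0.2]
u = math.sqrt
w = lambda v: u(v) / v

ew = sum(p * w(v) for p, v in zip(probs, values))            # E[w(X)]
q = [p * w(v) / ew for p, v in zip(probs, values)]           # q(x) = w(x) p(x) / E[w(X)]
assert math.isclose(sum(q), 1.0)                             # q is a probability distribution

wlu = sum(p * w(v) * v for p, v in zip(probs, values)) / ew  # WLU(X)
ev_q = sum(qx * v for qx, v in zip(q, values))               # E_{Y~q}[v(Y)]
assert math.isclose(wlu, ev_q)
```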