Should we Maximize the Geometric Expectation?
(I previously posted this on LessWrong, and someone in the comments suggested that it might be of interest to readers here.)
Consequentialists (including utilitarians) claim that the goodness of an action should be judged based on the goodness of its consequences. The word ‘utility’ is often used to refer to the quantified goodness of a particular outcome.[1]
When the consequences of an action are uncertain, it is often taken for granted that consequentialists should choose the action which has the highest expected utility. The expected utility is the sum of the utilities of each possible outcome, weighted by their probability. For a lottery which gives outcome utilities u_i with respective probabilities p_i, the expected utility is:
E[U] = \sum_i p_i u_i.

There are several good reasons to use the maximization of expected utility as a normative rule. I’ll talk about some of them here, but I recommend Joe Carlsmith’s series of posts ‘On Expected Utility’ as a good survey.
Here, I’m going to consider what ethical decisions might look like if we instead chose to maximize the geometric expectation of utility (which I’ll also refer to as the geometric average), as given by the formula:
G[U] = \prod_i u_i^{p_i}.

I’m going to look at a few reasons why maximizing the geometric expectation of utility is appealing and some other reasons why it is less appealing.
For the sake of exploring the difference between the geometric expectation and the expected value, I’ll mostly assume that ‘utility’/goodness is a property of each possible state of the world, without going into huge detail about the question of ‘what is goodness?’.
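As a concrete illustration, here is a minimal Python sketch (with made-up numbers) of how the two averages can rank the same lottery very differently:

```python
import math

def expected_utility(lottery):
    """Arithmetic expectation: sum of p_i * u_i over (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

def geometric_expectation(lottery):
    """Geometric expectation: product of u_i ** p_i (utilities must be non-negative)."""
    return math.prod(u ** p for p, u in lottery)

# A hypothetical lottery: 51% chance of utility 200, 49% chance of utility 1.
lottery = [(0.51, 200.0), (0.49, 1.0)]
print(expected_utility(lottery))       # ≈ 102.5
print(geometric_expectation(lottery))  # 200**0.51 * 1**0.49 ≈ 14.9
```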
Geometric Expectation ≠ Logarithmic Utility
I want to get this out of the way before starting.
Maximizing the geometric expectation is mathematically equivalent to maximizing the expected value of the logarithm of utility[2]. This leads some people to use ‘geometric averaging’ and ‘logarithmic utility’ interchangeably. I don’t like this and I’ll explain why. First: just because they are equivalent mathematically, this doesn’t mean that they encode the same intuitions (as Scott Garrabrant writes: “you wouldn’t define x×y as e^{ln(x)+ln(y)}” even though they give the same result). Writing the geometric expectation emphasises that wherever two terms are added in the expected value, they are multiplied in the geometric expectation.
Second: there are two ‘variables’ at play here: the utility function (which assigns a utility to each outcome) and the averaging method (which is used to decide between lotteries with uncertain outcomes). If Alice and Bob agree on the utilities of each outcome (ie. they have the same utility function) but Alice chooses to maximize expected utility and Bob chooses to maximize the geometric expectation, they will behave differently. It seems weird to say that Bob is really just maximizing logarithmic utility, since he and Alice both agreed on their utility functions beforehand. Choosing a utility function and choosing an averaging method (or another way of deciding in uncertain situations) are two different decisions that shouldn’t be smuggled together.
Finally: I prefer the fact that geometric averaging can more easily deal with outcomes of zero utility. log(0) is not well-defined, but 0^p is. There are ways around this, but I find the geometric averaging approach more intuitive.
While reading this, I encourage you to view the geometric expectation as a different way of averaging and deciding between uncertain outcomes, not just a different utility function. If you find yourself thinking about the differences between geometric expected utility and expected utility in terms of utility functions, remind yourself that, for any non-negative utility function, one can choose either averaging method.
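For concreteness, a quick numerical check of the mathematical equivalence noted above (a sketch, assuming strictly positive utilities):

```python
import math

p = [0.51, 0.49]
u = [200.0, 1.0]

# The geometric expectation, computed directly and via the expected log.
geometric = math.prod(ui ** pi for pi, ui in zip(p, u))
via_logs = math.exp(sum(pi * math.log(ui) for pi, ui in zip(p, u)))
print(geometric, via_logs)  # identical up to floating-point error
```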
I’ll now survey some arguments for and against using the geometric expectation in ethical decision-making, compared to the expected value.
Arguments for using the Geometric Expectation
The Time-averaged Growth Rate
Maximizing the geometric average is the same as maximizing the time-averaged multiplicative growth rate of utility.
If your initial utility is v_0 then getting a utility u_i is the same as multiplying your utility by a factor r_i = u_i/v_0. If a lottery has a set of utility payoffs {u_i} with corresponding probabilities p_i then you can equivalently view it as a lottery where the payoffs are given in terms of multipliers {r_i} of your initial utility. Imagine repeating this lottery so that each time it is repeated, your current utility is multiplied by r_i with probability p_i. If v_N is your utility after N repetitions of the lottery, then the average factor that your utility grows by each repetition is (v_N/v_0)^{1/N}. If n_i is the number of times that outcome i occurs, then in the limit of large N: n_i/N → p_i. The time-averaged growth rate in the limit of an infinite number of repetitions is therefore:
\lim_{N\to\infty} \prod_i \left(\frac{u_i}{v_0}\right)^{n_i/N} = \frac{1}{v_0} \prod_i u_i^{p_i} = \frac{1}{v_0} G[U]

Thus, maximizing the geometric average of utility can be viewed as maximizing the time-averaged growth rate of your utility, if a lottery is repeated multiplicatively. Sometimes, a lottery might have a positive expected value but a time-averaged growth rate of less than one (see this footnote[3] for Ole Peters’ oft-repeated coin toss example of this phenomenon).
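A small simulation (a sketch, using the coin-toss multipliers from footnote 3) shows the empirical per-step growth factor converging to the geometric expectation rather than the expected value:

```python
import math
import random

random.seed(0)
probs, multipliers = [0.5, 0.5], [1.5, 0.6]  # the coin-toss lottery from footnote 3

# Theoretical time-averaged growth rate: the geometric expectation of the multipliers.
g = math.prod(r ** p for p, r in zip(probs, multipliers))  # 1.5**0.5 * 0.6**0.5 ≈ 0.949

# Simulate N multiplicative repetitions, accumulating logs to avoid under/overflow.
N, log_v = 100_000, 0.0
for _ in range(N):
    log_v += math.log(random.choices(multipliers, weights=probs)[0])
print(math.exp(log_v / N), g)  # empirical growth factor ≈ 0.949, despite EV 1.05 per step
```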
The Kelly Criterion
The Kelly Criterion is a strategy for sizing bets in gambles which is widely used by professional sports bettors and investors. It is equivalent to sizing bets such that they maximize the geometric expectation. It has been shown that a Kelly bettor will, in the long run, outperform bettors using any other ‘essentially different’ strategy (including expected value maximization) for sizing their bets. In particular, the ratio between the bankroll of an agent using the Kelly strategy and the bankroll of an agent using a different strategy will tend to infinity as the number of repeated bets tends to infinity. This holds even when the odds of the gambles change each repetition. See Kelly’s paper here or this paper by Edward Thorp for proofs of this and some other, similar claims.
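For a simple binary bet this is easy to check numerically (a sketch with hypothetical odds; the closed-form fraction p − (1−p)/b for net odds b is the standard Kelly result):

```python
def growth_rate(f, p, b):
    """Geometric growth factor per bet when staking a fraction f of the bankroll
    on a bet won with probability p at net odds b."""
    return (1 + f * b) ** p * (1 - f) ** (1 - p)

p, b = 0.6, 1.0          # hypothetical: 60% win probability at even odds
kelly = p - (1 - p) / b  # closed-form Kelly fraction: 0.2

# Brute-force check that the Kelly fraction maximizes the geometric growth rate.
best = max((i / 1000 for i in range(1000)), key=lambda f: growth_rate(f, p, b))
print(kelly, best)       # both ≈ 0.2
```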
If you assume utility is unbounded, these arguments in favour of Kelly betting also count as arguments in favour of geometric expectation maximization. If you care about utility, then the fact that this strategy will, in the long run, get you more utility than any other strategy is a pretty compelling reason to use it. If utility is bounded, then the proofs are not as strong, but they still approximately hold in situations where the current utility is small compared to the maximum possible utility.
Intuitions around Extinction
Suppose you are a total utilitarian and you believe that the earth and the lives of its inhabitants are net positive. You also believe that there is no other life in the universe. Would you accept a 51% chance of creating a new, fully populated earth if there was a 49% chance of destroying this earth[4]? Setting aside considerations about the suffering involved if earth disappeared, an expected value maximizer is obliged to accept this gamble. A geometric expectation maximizer is not. Personally, geometric expectation maximization fits my intuitions better in this situation.
Pascal’s Mugging
Consider the following lottery: there is a small probability p of receiving a large utility payoff Δ and a large probability of having to pay a small utility cost δ. The expected value of this lottery is pΔ − (1−p)δ. An expected utility maximizer will accept this lottery if this expression is positive, regardless of how small p is made. When p is very small, Δ is very large and δ relatively small, this situation is sometimes called Pascal’s mugging.
Expected utility maximization compels one to accept Pascal’s mugging, but some find it unappealing[5]. Geometric expectation maximizers can also be Pascal-mugged, but are generally more reluctant to accept the gamble. For a starting utility v_0, a geometric expectation maximizer will accept the Pascal mugging if
\Delta > v_0^{1/p} (v_0 - \delta)^{-(1-p)/p} - v_0.

Note that this threshold diverges as δ (the cost of losing) approaches v_0 (your utility before the gamble). If δ = v_0, then there is no payoff Δ which would justify accepting the gamble. It is harder to Pascal-mug a geometric utility maximizer.
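A quick numerical comparison (a sketch with made-up numbers; the geometric threshold follows from requiring (v_0+Δ)^p (v_0−δ)^{1−p} > v_0):

```python
import math

v0, delta, p = 100.0, 1.0, 0.01  # hypothetical starting utility, cost, and win probability

# Payoff above which an expected-value maximizer accepts: p*Δ > (1-p)*δ.
ev_threshold = (1 - p) * delta / p  # 99.0

# Payoff above which a geometric-expectation maximizer accepts,
# computed in log space to avoid overflow when p is small.
g_threshold = math.exp((math.log(v0) - (1 - p) * math.log(v0 - delta)) / p) - v0
print(ev_threshold, g_threshold)  # ≈ 99 vs ≈ 170: the geometric maximizer demands more
```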
Arguments Against using the Geometric Expectation
Violates Von Neumann-Morgenstern Rationality
A geometric utility maximizer rejects the VNM axiom of Continuity, which states that for any three lotteries with preference ordering L≼N≼M, there must exist a probability p such that pL+(1−p)M∼N. In words: there is some probability with which you can ‘mix’ L and M such that the resulting lottery is equally preferable to N.
Geometric utility maximization rejects this axiom, since, if L is a zero utility outcome, then the geometric expected utility of any lottery involving L will also be zero, regardless of how large you make the payoff of M. In terms of money: a geometric expectation maximizer will never accept the tiniest risk of absolute bankruptcy, even if it comes with an arbitrarily high probability of an arbitrarily large payoff.
Violating the Continuity Axiom is bad because it leaves you open to exploitation. Violations of the other VNM axioms allow you to be money pumped (ie. accept a series of lotteries which are guaranteed to make you lose utility) with certainty, while violations of the Continuity Axiom ‘only’ make you worse off with arbitrarily high probability. If you refuse pL + (1−p)M and instead pick N, then you will end up worse off with probability 1−p, even when 1−p is really high. Furthermore, if L is a zero utility outcome, N only needs to be the tiniest bit above zero in order to get a geometric utility maximizer to choose it.
This is pretty bad, but is it much worse than accepting Pascal’s mugging? In Pascal’s mugging, you also accept a situation which is almost guaranteed (with arbitrarily high probability) to make you worse off. But people don’t refer to this as a money pump, as they think the small probability of very high utility compensates for this.
Expected value maximizers fanatically pursue high utility; geometric utility maximizers fanatically avoid low/zero utility. Both are willing to accept almost guaranteed losses in order to pursue these preferences. These seem unappealing in symmetrical ways. Neither (to my mind) comes out better in this comparison.
Another way of highlighting the violation of VNM rationality is to point out that any lottery with a nonzero probability of zero utility has a geometric expectation of zero, meaning that all such lotteries are equally desirable according to geometric utility maximization. This is especially concerning if we, as good Bayesians, refuse to assign a zero probability to any event, including zero utility ones. This would make all real world lotteries indistinguishable.
There is a simple workaround to this, which is to treat a zero utility outcome as having a small finite utility u_ε, and to compare different lotteries by taking the ratio of their geometric expectations in the limit u_ε → 0. This amounts to the rule: “choose the lottery with the lowest probability of zero utility; if two lotteries have the same probability of zero utility, choose the one with the highest geometric expectation over the remaining nonzero utility outcomes”. It’s a bit hacky, but it works.
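A minimal sketch of this rule in Python (the helper name and the (probability, utility) representation are my own):

```python
import math

def lottery_key(outcomes):
    """Sort key for the lexicographic rule above, where `outcomes` is a list of
    (probability, utility) pairs: prefer the lowest total probability of zero
    utility, then the highest geometric expectation of the nonzero outcomes."""
    p_zero = sum(p for p, u in outcomes if u == 0)
    g_nonzero = math.prod(u ** p for p, u in outcomes if u > 0)
    return (-p_zero, g_nonzero)

# Hypothetical lotteries, each with some risk of zero utility.
a = [(0.01, 0.0), (0.99, 100.0)]
b = [(0.02, 0.0), (0.98, 1000.0)]
print(max([a, b], key=lottery_key) is a)  # True: the lower risk of ruin wins first
```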
The Veil of Ignorance (aka the Original Position)
Arguments going back to Harsanyi[6] consider situations where rational humans (for some definition of ‘rational’) have to choose a course of action affecting a group, without knowing which position in the group they will occupy. Acting self-interestedly, they will choose the option with the highest expected utility. Carlsmith has pointed out that, if a large number of people are drowning and there are several lotteries with payoffs involving saving different numbers of people chosen at random, the lottery which gives each individual the best chance of surviving is also the lottery with the highest expected value of lives saved. Thus, if each person (self-interestedly) voted on a course of action, they would vote to maximize the expected value.
There are situations where maximizing the geometric expectation of lives saved will go against the votes of people behind the veil of ignorance. In many situations this is bad; however, the veil of ignorance is an intuition pump, not an infallible guide which applies in all situations.
Suppose there are only 1000 people left on earth and they are given a choice between two lotteries. In lottery A there is a 51% chance that they all survive and a 49% chance they all die. In lottery B, 500 of them are randomly chosen to survive and the others will die. Under the veil of ignorance, lottery A gives better individual odds of survival, but the geometric expectation favours lottery B. Intuitions pull differently for different people, but it is not clear to me that lottery B is obviously wrong.
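Concretely, with lives saved as the utility (numbers from the example above):

```python
# Lottery A: 51% chance all 1000 survive, 49% chance all die.
ev_a = 0.51 * 1000 + 0.49 * 0   # 510 expected survivors
g_a = 1000 ** 0.51 * 0 ** 0.49  # 0.0: any chance of extinction zeroes the geometric expectation

# Lottery B: exactly 500 survive, for certain.
ev_b, g_b = 500.0, 500.0

print(ev_a > ev_b, g_b > g_a)   # True, True: the two rules disagree
```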
Ensemble Averaging
If a large ‘ensemble’ of people all independently accept a lottery and agree to share any profits/losses between them equally, then the amount they will each receive will approach the expected value of the lottery (as the size of the ensemble approaches infinity and the law of large numbers applies). The geometric expectation of the lottery does not provide a guide to your utility in this situation. If people in the ensemble used the geometric expectation to choose their lotteries, they would all end up worse off.
In some ways, this is the counterpart to the multiplicative time-averaging we encountered above. Both repeat the gamble many times independently, either sequentially or in parallel. When the gamble is repeated multiplicatively in sequence, the geometric expectation is the best guide to your wealth; when it is repeated in parallel, in an ensemble, the expected value is better.
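A sketch of the ensemble version of the footnote-3 coin toss (each member plays once from a stake of 1, then the pool is shared equally):

```python
import random

random.seed(1)
N = 100_000  # ensemble size

# Each member independently multiplies a stake of 1 by 1.5 or 0.6 with equal probability.
pool = sum(random.choice([1.5, 0.6]) for _ in range(N))
print(pool / N)  # ≈ 1.05, the expected value, not the geometric expectation ≈ 0.949
```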
Background Independence
If you take a lottery with outcomes u_i and you add a constant utility x to each of them, the expected value of this new lottery is simply x plus the expected value of the original lottery. If you are comparing multiple lotteries, the lottery with the highest expected utility doesn’t change if you add a constant x to each outcome of each lottery. This property sometimes comes under the umbrella term of ‘background independence’. The preference ordering over lotteries (as decided by the expected value) is not affected by things that are unchanged by the outcome of the lotteries.
This is not the case for the geometric expectation, which is said to reject background independence. What matters for geometric maximization is the proportional change in utility, not the absolute change. For an expected utility maximizer, saving 10 lives is equivalent whether they are 10 people among 8 billion others, or whether they are the last 10 people in the universe. For a geometric utility maximizer, the latter situation represents a larger proportional change.
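A sketch of this difference (the lotteries and the background constant are made-up numbers):

```python
import math

def ev(outcomes):
    return sum(p * u for p, u in outcomes)

def geo(outcomes):
    return math.prod(u ** p for p, u in outcomes)

def shift(outcomes, x):
    """Add a constant 'background' utility x to every outcome."""
    return [(p, u + x) for p, u in outcomes]

a = [(1.0, 10.0)]                 # a certain 10
b = [(0.5, 0.01), (0.5, 1000.0)]  # near-ruin or a big win

print(ev(a) < ev(b), ev(shift(a, 100)) < ev(shift(b, 100)))      # True, True: EV ordering unchanged
print(geo(a) < geo(b), geo(shift(a, 100)) < geo(shift(b, 100)))  # False, True: geometric ordering flips
```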
Rejecting background independence has been criticised by Wilkinson, in his paper ‘In Defence of Fanaticism’, using a variation of Parfit’s Egyptology argument. Roughly, if you reject background independence then there are some lotteries for which choosing between them entirely depends on your knowledge of the ‘background’ which is unaffected by the lotteries, rather than the outcomes that are actually at stake. For example, if you believe in assigning moral weight to aliens on the other side of the universe, then whether or not they exist would not affect the decisions made by an expected utility maximizer, but would affect the decisions made by a geometric utility maximizer. Thus a geometric utility maximizer could conceivably spend large amounts of resources researching astrobiology and distant galaxies in order to make a decision which only affects people on earth.
This is often taken to just be absurd, in the same way that Parfit’s average utilitarian who researches Egyptology in order to decide whether to have children is absurd. But to me it’s not so obvious. If we’re making decisions about a particular reference class, it’s not crazy that knowing more about that reference class will change the kind of decisions we make. Accepting a lottery which has a 10% chance of killing 10 people is more significant if there are only 10 people left on earth. Also, rejecting background independence is not unique to geometric averaging: it also applies to any expected utility maximizer whose utility function is nonlinear in any quantity (this applies to, for example, any VNM utility function, which by definition must be bounded and therefore nonlinear in some quantity).
Conclusion
When I started this piece, I was hoping that geometric utility maximization would prove to be a satisfactory replacement for expected utility maximization, about which I have some lingering dissatisfaction. Instead, like pushing around a lump under the carpet, it seems to resolve issues in some situations, which then pop up in a different form somewhere else. Geometric utility maximization fits some of my intuitions regarding ethical decision making but not others. The same is true of expected utility maximization. Maybe searching for a version of consequentialism that fits all intuitions is hopeless. But I find viewing expected utility as the default ‘obviously correct’ option unappealing. I can imagine a world where people thought more in terms of the geometric expectation and geometric utility maximization was considered the default model of rational behaviour, as opposed to expected utility. It’s a bit weird, but this imaginary world doesn’t look too crazy.
Here, I will use the word ‘utility’ in this normative sense, to describe something that we ought to aim for. This is to be distinguished from the ‘descriptive’ sense of the word ‘utility’, which is inferred from behaviour. See the wikipedia page for ‘utility’ for more on this distinction.
\log(G[U]) = \log\left(\prod_i u_i^{p_i}\right) = \sum_i p_i \log(u_i) = E[\log(U)]

Since log is a monotonically increasing function, log(G[U]), and thus E[log(U)], encodes the same preference ordering as G[U].
Imagine that a fair coin will be tossed. If it lands heads, your utility will be multiplied by a factor of 1.5. If tails, it will be multiplied by a factor of 0.6. Imagine that this lottery is repeated many times. What is the average factor that your utility will be multiplied by each time? If your initial utility is v_0 and your utility after N repetitions is v_N, then, on average, your utility has been multiplied by a factor of (v_N/v_0)^{1/N} each repetition. Call n_H the number of times the coin lands heads and n_T the number of times it lands tails. In the limit that N goes to infinity, invoking the law of large numbers, we can say that n_H/N will approach 1/2 (ie. the probability that the coin lands heads). Thus, in the limit, the average factor that utility is multiplied by is 1.5^{1/2} × 0.6^{1/2} ≈ 0.949. We call this the ‘time-averaged growth rate’. Note that this expression is the geometric expectation of the utility of the initial lottery, divided by the initial value of your utility. Thus, the geometric expectation of the lottery tells us the time-averaged growth rate of the gamble if it is repeated multiplicatively. In this case, while the expected utility gain is positive ((1/2) × 1.5v_0 + (1/2) × 0.6v_0 = 1.05v_0), if we repeat this gamble enough times, we are almost guaranteed to end up with lower utility.
This example is often given by Ole Peters when discussing his ‘ergodicity economics’.
This is the question that Tyler Cowen asked Sam Bankman-Fried in order to probe his famously ‘risk-neutral’ approach to utilitarianism. SBF, as an expected utility maximizer, bit the bullet and said he would accept the gamble.
Expected utility advocates normally get around Pascal’s muggings by advocating for bounded utility functions. However, provided that utility is currently low compared to the upper bound, one can always come up with a lottery with a very small probability of a large utility payoff. The only way to avoid this is to say that utility is currently at a significant fraction of the upper bound (eg. you could say that it is impossible to increase utility from its current level by more than a factor of 100). To me, this seems to indicate a lack of imagination regarding how much better the world could be.
Harsanyi, ‘Cardinal Utility in Welfare Economics and in the Theory of Risk-taking’ (1953). Paywalled link here.