Like most defenses of ergodicity economics that I have seen, this is just an argument against risk neutral utility.
Edit: I never defined risk neutrality. Expected utility theory says that people maximize the expectation of their utility function, E[U(c)]. Risk neutrality means that U(c) is linear, so that maximizing expected utility is exactly the same as maximizing the expected value of the outcome c: in that case E[U(c)]=U(E[c]). This is not true in general. If U(c) is concave, meaning that it satisfies diminishing marginal utility, then E[U(c)]<U(E[c]) by Jensen's inequality - in other words, the expected utility of a bet is less than the utility of its expected value as a sure thing. This is known as risk aversion.
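To make the inequality concrete, here is a quick numeric check with U(c)=√c and an arbitrary 50/50 gamble (the numbers are made up for illustration):

```python
import math

# Arbitrary gamble: c is 4 or 16 with equal probability, so E[c] = 10
outcomes, probs = [4, 16], [0.5, 0.5]

eu = sum(p * math.sqrt(c) for c, p in zip(outcomes, probs))       # E[U(c)] = 3.0
u_of_ev = math.sqrt(sum(p * c for c, p in zip(outcomes, probs)))  # U(E[c]) ~ 3.162

print(f"E[U(c)] = {eu:.3f} < U(E[c]) = {u_of_ev:.3f}")  # risk aversion
```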
Consider for instance whether you would be willing to take the following gamble: you’re offered the chance to press a button with a 51% chance of doubling the world’s happiness but a 49% chance of ending it. This problem, also known as Thomas Hurka’s St. Petersburg Paradox, highlights the following dilemma: maximizing expected utility suggests you should press it, as it promises a net positive outcome.
No. A risk neutral agent would press the button because they are maximizing expected happiness. A risk averse agent gets less utility from the doubled happiness than the utility they would lose by losing all of the existing happiness. For example, if your utility function is U(c)=√c and current happiness is 1, then the expected utility of this bet is E[U(c)]=0.51∗√2+0.49∗√0≈0.72, whereas the utility of the status quo is √1=1, which beats the bet.
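In code, sticking with the √c example (happiness normalized so the status quo is 1):

```python
import math

# Hurka's button under U(c) = sqrt(c), with current happiness normalized to 1
p_win = 0.51
eu_press = p_win * math.sqrt(2) + (1 - p_win) * math.sqrt(0)  # ~0.721
u_status_quo = math.sqrt(1)                                   # = 1.0

print(f"EU(press) = {eu_press:.3f} < U(status quo) = {u_status_quo:.3f}")
# A risk neutral agent instead compares 0.51 * 2 = 1.02 > 1 and presses.
```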
When you are risk averse, your current level of happiness/income determines whether a bet is optimal. This is a simple and natural way to incorporate the sequence dependence that you emphasize. After winning a few bets, your income/happiness has grown so much that marginal income/happiness is worth much less than what you already have, so risking it all for a further gain is not worthwhile. Expected utility theory is totally compatible with this; no ergodicity economics needed to resolve this puzzle.
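Here is a minimal sketch of that wealth dependence under U(c)=√c; the prize size and win probability are made up for illustration. The same fixed prize is worth risking everything for when you are poor, but not after your wealth has grown:

```python
import math

def accepts(wealth, prize=10.0, p_win=0.99):
    """Bet: win a fixed prize with probability p_win, lose everything otherwise."""
    eu_bet = p_win * math.sqrt(wealth + prize)  # the ruin branch contributes 0
    return eu_bet > math.sqrt(wealth)           # compare to standing pat

for w in [1, 100, 10_000]:
    print(f"wealth {w:>6}: {'accept' if accepts(w) else 'reject'}")
# wealth 1 and 100: accept; wealth 10000: reject
```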
Now, risk aversion is unappealing to some utilitarians because it implies that there is diminishing value to saving lives, which is its own bullet to bite. But any framework that takes the current state of the world into account when deciding whether a bet is worthwhile has to bite that bullet, so it’s not like ergodicity economics is an improvement in that regard.
Thanks for reading and the insightful reply! I agree, with one subtle difference: the ergodicity framework allows one to decide when to apply risk neutrality and when to apply risk aversion.
Following Cowen and Parfit, let’s make the normative claim that one should be risk neutral because one should not assume diminishing value to saving lives. The EE framework allows one to deviate from this view when multiplicative and repetitive dynamics are in play (e.g. bets over wealth dynamics and Hurka’s St. Petersburg Paradox). One is then risk averse not because of some pre-defined utility function, but because the actual long-term outcome is worse (going bankrupt, destroying the world). An actor can therefore decide to be risk neutral in scenario A (e.g. neartermist questions) and risk averse in scenario B (e.g. longtermist questions).
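For concreteness, here is the standard EE illustration of multiplicative dynamics (the +50%/-40% coin flip is the textbook example, not something from this thread): the ensemble average grows every round, while the time-average growth rate shrinks wealth:

```python
# The coin multiplies wealth by 1.5 on heads and 0.6 on tails
up, down, p = 1.5, 0.6, 0.5

ensemble_avg = p * up + (1 - p) * down  # 1.05: positive expected value per round
time_avg = up**p * down**(1 - p)        # sqrt(0.9) ~ 0.949: wealth shrinks over time

print(f"ensemble average per round:    {ensemble_avg:.3f}")
print(f"time-average growth per round: {time_avg:.3f}")
```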
PS: you’re completely right on the ‘risk neutral agent’ part; my wording was ambiguous.
If you are truly risk neutral, ruin games are good. The long-term outcome is not worse in expectation, because the 99% of histories in which the world is destroyed are outweighed by how much better the remaining 1% are. If you believe in risk neutrality as a normative stance, then you should be okay with that.
Put another way: if someone offers you a bet with a 99% chance to 1000x your money and a 1% chance to lose it all, you might want to take it once or twice. You don’t have to choose between “never take it” and “take it forever”. But if you find sequence dependence desirable in this situation, then you shouldn’t be risk neutral.
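A quick simulation of repeated play (the player and round counts are arbitrary) makes the tension explicit: the mean outcome is astronomical, while the typical player is ruined:

```python
import numpy as np

rng = np.random.default_rng(0)
players, rounds = 50_000, 100

# Each round: 99% chance to 1000x your wealth, 1% chance of total ruin
wins = rng.random((players, rounds)) < 0.99
survived = wins.all(axis=1)  # ruined as soon as any round is lost
wealth = np.where(survived, 1000.0 ** rounds, 0.0)

print(f"fraction ruined: {1 - survived.mean():.3f}")  # ~0.63
print(f"mean wealth:     {wealth.mean():.3e}")        # astronomically large
print(f"median wealth:   {np.median(wealth):.1f}")    # 0.0
```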
Deciding to apply risk aversion in some cases and risk neutrality in others is not special to ergodicity either. A risk averse (concave) utility function is approximately linear over small ranges but substantially curved over large ones. I claim that for “small” numbers of lives at stake, my utility function is only slightly curved, so it’s approximately linear and risk neutrality describes my optimal choice well. For “large” numbers, however, the curvature dominates and risk neutrality fails.
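As a sketch of that claim under U(c)=√c, compare the certainty equivalents of a small-stakes and a large-stakes 50/50 gamble with the same expected value (the numbers are illustrative):

```python
import math

def certainty_equivalent(outcomes, probs):
    """The sure amount with the same sqrt-utility as the gamble."""
    eu = sum(p * math.sqrt(c) for c, p in zip(outcomes, probs))
    return eu ** 2

# Two 50/50 gambles, both with expected value 1000
small = certainty_equivalent([999, 1001], [0.5, 0.5])   # ~999.9995
large = certainty_equivalent([100, 1900], [0.5, 0.5])   # ~718

print(f"small stakes: CE = {small:.4f} (~EV, nearly risk neutral)")
print(f"large stakes: CE = {large:.1f} (far below EV; curvature dominates)")
```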