In normative decision theory, risk aversion means a very specific thing. It means using a different aggregating function from expected utility maximisation to combine the value of disjunctive states.
Rather than multiplying the realised utility in each state by the probability of that state occurring, these models apply a non-linear weighting to each state that depends on the global properties of the lottery, not just on what happens in that state.
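To make this concrete, here is a rough Python sketch contrasting the two aggregators; the risk function r(p) = p^2 is just one illustrative risk-averse choice, nothing canonical:

```python
# Rough sketch contrasting plain expected utility with a Buchak-style
# risk-weighted expected utility (REU). The risk function r(p) = p**2 is
# one illustrative risk-averse choice, not something fixed by the theory.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

def risk_weighted_eu(lottery, r=lambda p: p ** 2):
    """Start from the worst outcome, then weight each further increment
    in utility by r(probability of doing at least that well)."""
    outcomes = sorted(lottery, key=lambda pu: pu[1])  # worst to best
    reu = outcomes[0][1]   # you get at least the worst utility for sure
    tail = 1.0             # P(reaching the current outcome or better)
    for i in range(1, len(outcomes)):
        tail -= outcomes[i - 1][0]
        reu += r(tail) * (outcomes[i][1] - outcomes[i - 1][1])
    return reu

gamble = [(0.5, 0.0), (0.5, 100.0)]  # 50/50 shot at 100 utils
print(expected_utility(gamble))      # 50.0
print(risk_weighted_eu(gamble))      # 25.0: the good state gets weight
                                     # r(0.5) = 0.25, not 0.5
```

The weight on the good outcome depends on where it sits in the whole lottery, not merely on its own probability.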
Most philosophers and economists agree risk aversion over utilities is irrational because it violates the independence axiom / sure-thing principle, which is one of the foundations of objective/subjective expected utility theory.
One way a person could rationally have seemingly risk-averse preferences is by placing a higher value on the first bit of good they do than on the second bit of good they do, perhaps because doing some good makes you feel better. This would technically be selfish.
But I'm pretty sure this isn't what most people who justify donating to global poverty out of risk aversion actually mean. They generally mean something like "we should place a lot of weight on evidence because we aren't actually very good at abstract reasoning". This would mean their subjective probability that an x-risk intervention is effective is very low. So it's not technically risk aversion. It's just having a different subjective probability. This may be an epistemic failure. But there's nothing selfish about it.
I wrote a paper on this a while back in the context of risk aversion justifying donating to multiple charities. This is a shameless plug. https://docs.google.com/document/d/1CHAjFzTRJZ054KanYj5thWuYPdp8b3WJJb8Z4fIaaR0/edit#heading=h.gjdgxs
I just want to push back against your statement that "economists believe that risk aversion is irrational". In development economics in particular, risk aversion is often seen as a perfectly rational approach to life, especially in cases where the risk is irreversible.
To explain this, I just want to quickly point out that, from an economic standpoint, there's no correct formal way of measuring risk aversion over utils. Utility is an ordinal, not a cardinal, measure. Risk aversion is something that is applied to real measures, like crop yields, in order to better estimate people's revealed preferences; in essence, risk aversion is a way of taking utility into account when measuring non-utility values.
So, to put this in context, let's say you are a subsistence farmer with an expected yield of X from growing sorghum or a tuber, and you know that you'll always get roughly X (since sorghum and many tubers are remarkably resilient). Now someone offers you an "improved maize" growth package that will get you an expected yield of 2X, but with a 10% chance that your crops fail completely. A rational person at the poverty line should always choose the sorghum/tuber. This is because that 10% chance of a failed crop is much, much worse than expected yield alone can reveal: you could starve, have to sell productive assets, etc. Risk aversion is a way of formalizing the thought process behind this perfectly rational decision. If we could measure expected utility in a cardinal way, we would just do that and get the correct answer without using risk aversion; but because we can't measure it cardinally, we have to use risk aversion to account for things like this.
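To put rough numbers on this, here is a sketch under assumptions of my own choosing (CRRA utility with gamma = 3, and a failed harvest leaving the farmer at 5% of baseline consumption; none of these figures come from the example above):

```python
# Rough sketch: the farmer's choice under a concave CRRA utility function.
# gamma = 3 and the "failure leaves you at 5% of baseline consumption"
# figure are illustrative assumptions, not from the example above.

def crra(c, gamma=3.0):
    """Constant-relative-risk-aversion utility; its concavity encodes DMU."""
    return c ** (1.0 - gamma) / (1.0 - gamma)

X = 1.0                     # certain sorghum/tuber yield
failure_consumption = 0.05  # consumption if the maize crop fails completely

eu_sorghum = crra(X)
# "expected yield 2X with a 10% chance of total failure": yield 2X/0.9 with
# probability 0.9, failure with probability 0.1
eu_maize = 0.9 * crra(2 * X / 0.9) + 0.1 * crra(failure_consumption)

print(eu_sorghum)  # -0.5
print(eu_maize)    # about -20.1: the failure branch swamps the doubled yield
```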
As a last fun point, risk aversion can also be used to formalize the idea of diminishing marginal utility without using cardinal utility functions, which is one of the many ways that we're able to "prove" that diminishing marginal utility exists, even if we can't measure it directly.
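A quick sketch of that idea (the 50/50 bet and the square-root utility are purely illustrative):

```python
import math

# Rough sketch: risk-averse choices reveal diminishing marginal utility with
# no cardinal scale needed. The square-root utility is one illustrative
# concave function; any concave u puts the certainty equivalent below the
# expected value (Jensen's inequality).

def certainty_equivalent(u, u_inv, lottery):
    """The sure amount whose utility equals the lottery's expected utility."""
    return u_inv(sum(p * u(x) for p, x in lottery))

bet = [(0.5, 0.0), (0.5, 100.0)]  # fair 50/50 bet on 0 vs 100; expected value 50

print(certainty_equivalent(math.sqrt, lambda y: y ** 2, bet))
# 25.0 < 50: preferring a sure 40 (say) to this bet is the behavioural
# signature of concave, i.e. diminishing-marginal, utility
```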
I agree that dmu over crop yields is perfectly rational. I mean a slightly different thing: risk aversion over utilities, which is why people fail the Allais paradox. Rational choice theory is dominated by expected utility theory (exceptions: Buchak, McClennen), which suggests risk aversion over utilities is irrational. Risk aversion over utilities seems pertinent here because most moral views don't have dmu of people's lives.
I think that this discussion really comes from the larger discussion about the degree to which we should consider rational choice theory (RCT) to be a normative, as opposed to a positive, theory (for a good overview of the history of this debate, I would highly suggest this article by Wade Hands, especially the example on page 9). As someone with an economics background, I very heavily skew toward seeing it as a positive theory (which is why I pushed back against your statement about economists' view of risk aversion). In my original reply I wasn't very specific about what I was saying, so hopefully this will help clarify where I'm coming from!
I just want to say that I agree that rational choice theory (RCT) is dominated by expected utility (EU) theory. However, I disagree with your portrayal of risk aversion. In particular, I agree that risk aversion over expected utility is irrational, but my reasoning for saying this is very different. From an economic standpoint, risk aversion over utils is, by its very definition, irrational. When you define "rational" to mean "that which maximizes expected utility" (as it is defined in EU and RCT models), then of course being risk averse over utils is irrational: under this framework, risk neutrality over utils is a necessary prerequisite for the model to work at all. This is why, in cases where risk aversion is important (such as the yield example), expected utility calculations take risk aversion into account when calculating the utils associated with each situation, thus making risk aversion over the utils themselves redundant.
Put in a slightly different way, we need to remember that utils do not exist; they are an artifact of our modeling efforts. Risk neutrality over utils is a necessary assumption of RCT in order to develop models that accurately describe decision-making (since RCT was developed as a positive theory). Because of this, the phrase "risk aversion over utility" has no real-world interpretation.
With that in mind, people don't fail the Allais paradox because of risk aversion over utils, since there is no such thing as being risk averse over utils. Instead, the Allais paradox is a case showing that older RCT models are insufficient for describing the actions of humans, since the empirical results appear to show, in a way, something akin to risk aversion over utils, which in turn breaks the model. This is an important point. Put differently, risk neutrality over utils is a necessary assumption of the model, and empirical results that disprove this assumption do not mean that humans are wrong (even though that may be true); it means that the model fails to capture reality. It was because the model broke (in this case and in others) that economics developed newer positive theories of choice, such as behavioral economics and bounded rationality models, that better describe decision-making.
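For concreteness, here is a rough sketch of the classic Allais lotteries (payoffs in $M; the utility functions tried are arbitrary), showing that no expected utility maximiser, however risk-averse its utility function, can produce the common pattern of choosing A over B and D over C:

```python
# Rough sketch of the classic Allais lotteries (payoffs in $M). The utility
# functions tried below are arbitrary; the algebra in the comments shows why
# no choice of u can produce the common pattern (A over B *and* D over C).

def eu(lottery, u):
    return sum(p * u(x) for p, x in lottery)

A = [(1.00, 1)]                        # $1M for certain
B = [(0.89, 1), (0.10, 5), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

# A over B reduces to 0.11*u(1) > 0.10*u(5) + 0.01*u(0); D over C reduces to
# the exact reverse inequality, so expected utility forbids the common pattern.
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: x ** 0.05):
    print(eu(A, u) > eu(B, u), eu(D, u) > eu(C, u))
# each line prints "False True" or "True False", never "True True"
```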
At most, you can say that the Allais paradox is a case showing that people's heuristics associated with risk aversion are systematically biased toward decisions that they would not choose if they thought the problem through a bit more. This is definitely a case showing that people are irrational sometimes, and that maybe they should think through these decisions a little more thoroughly, but it does not have anything to do with risk aversion over utility.
Anyways, to bring this back to the main discussion: from this perspective, risk aversion is a completely fine thing to put into models, and it would not be irrational for Alex to factor in risk aversion. This would especially be fine if Alex is worried about the validity of their model itself (which Alex, not being an expert on modeling or AI risk, should consider to be a real concern). As a last point, I do personally think that we should be more averse to the risks associated with supporting work on far-future stuff and x-risks (which I've discussed partially here), but that's a whole other issue entirely.
Hope that helps clarify my position!
By this argument, someone who is risk-averse should buy insurance, even though they lose money in expectation. Most of the time, this money is wasted. Interestingly, x-risk research is like buying insurance for humanity as a whole. It might very well be wasted, but the downside of not having such insurance is so much worse than the cost of insurance that it makes sense (if you are risk neutral, and especially if you are risk-averse).
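To put toy numbers on the insurance logic (the wealth, loss, and premium figures are made up for illustration):

```python
import math

# Rough numerical sketch: wealth 100, a 1% chance of losing 90 (expected loss
# 0.9), and a premium of 1.5 are all made-up illustrative figures.

wealth, p_loss, loss, premium = 100.0, 0.01, 90.0, 1.5

u = math.log  # any concave utility gives the same qualitative answer

eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
eu_insured = u(wealth - premium)  # certain outcome: premium paid, loss covered

print(eu_uninsured)  # ~4.582
print(eu_insured)    # ~4.590: insuring wins despite costing 0.6 in expectation
```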
Edit: And actually, some forms of global catastrophic risk are surprisingly likely; for instance, a 10% global agricultural shortfall has about an 80% probability of occurring this century. So preparation for this would most likely not be wasted.
I agree, although some forms of personal insurance are also rational, e.g. health insurance in the US, because the downside of not having it is so bad. But don't insure your toaster.