Excellent post, and I agree with much of it. (In fact, I was planning to write something similar about the perils of expected value thinking.) I agree that SBF seems to have been misguided more by expected value thinking than by utilitarianism per se.
In particular, I think there’s been a very naive over-reliance in both EA and the LessWrong rationalist community on the Tversky & Kahneman ‘heuristics and biases’ program of research on ‘cognitive biases’. That’s the program that convinced many smart hyper-systematizers that ‘loss aversion’ and ‘risk aversion’ are irrational biases that should be overcome by ‘debiasing’.
Much of what SBF said in the interviews you quoted seems inspired by that ‘cognitive biases’ view that (1) expected utility theory is a valid normative model for decision making, and (2) humans should strive to overcome their biases and conform more to expected utility theory.
I understand the appeal of that thinking. I took Amos Tversky’s decision-making class at Stanford back in the late 1980s. I worked a fair amount on judgment and decision-making, and game theory, back in the day. However, from the late 1980s onwards, the cognitive biases research has been challenged repeatedly and incisively by other behavioral science researchers, including the ecological rationality field (e.g. Gerd Gigerenzer, Ralph Hertwig, Peter Todd), the evolutionary biology work on animal behavior (e.g. risk-sensitive foraging theory), and the evolutionary psychology field.
All of those fields converged on a view that loss aversion and risk aversion are NOT always irrational. In fact, for mortal animals that face existential risks to their survival and reproduction prospects, they are very often appropriate. This is the problem of the ‘lower boundary’ of ruination and disaster that the OP here mentioned. When animals—including humans—are under selection to live a long time, they do not evolve to maximize expected utility (e.g. calorie intake per hour of foraging). Instead, they evolve to minimize the likelihood of catastrophic outcomes (e.g. starvation during a cold night). The result: loss aversion and, often, risk aversion. (Of course, risk-seeking often makes sense in many domains of animal behavior, such as male-male competition for mates. But loss-seeking almost never makes sense.)
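To make the risk-sensitive foraging point concrete, here is a minimal simulation sketch with entirely made-up numbers (an 8-hour foraging day, a survival threshold, and two hypothetical patch types): the ‘risky’ strategy has the higher expected calorie intake, yet the ‘safe’ strategy gives a strictly better chance of surviving the night.

```python
import random

# Toy model of risk-sensitive foraging (made-up numbers, purely illustrative).
# A forager needs at least NEED calories by nightfall or it starves.
# The "risky" patch has the higher *expected* intake but high variance;
# the "safe" patch yields less on average but reliably.

NEED = 60          # calories required to survive the night
HOURS = 8          # foraging hours available
TRIALS = 100_000

def forage(risky: bool) -> bool:
    """Return True if the forager survives the night."""
    calories = 0
    for _ in range(HOURS):
        if risky:
            # 50% chance of 20 calories, 50% chance of 0 -> expected 10/hour
            calories += 20 if random.random() < 0.5 else 0
        else:
            # guaranteed 8 calories/hour -> expected 8/hour
            calories += 8
    return calories >= NEED

for label, risky in [("risky (EV=10/hr)", True), ("safe (EV=8/hr)", False)]:
    survival = sum(forage(risky) for _ in range(TRIALS)) / TRIALS
    print(f"{label:18s} survival probability ~ {survival:.3f}")
```

Under these made-up parameters the risky strategy gathers more calories on average (80 vs 64) but fails to clear the starvation threshold roughly 15% of the time, while the safe strategy always survives; that is the sense in which selection favors ‘risk aversion’ over expected-value maximization.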
So, I think EAs should spend a lot more time re-thinking our emphasis on expected utility maximization, and our contempt for ‘cognitive biases’—which often evolved as adaptive solutions to real-life dangers of catastrophic failure, not just as ‘failures of rationality’, as often portrayed in the Rationalist community. We should also be extremely wary of trying to ‘debias’ people, without understanding the evolutionary origins and adaptive functions of our decision-making ‘biases’.
A good start would be to read the many great books about decision making by Gerd Gigerenzer (including his critiques of Daniel Kahneman’s research and expected utility theory), and to learn a bit more about optimal foraging theory.
PS I’m especially concerned that AI safety research relies on expected value thinking about the benefits and costs of developing transformational AI. As if a huge potential upside from AI (prosperity, longevity, etc.) can counter-balance the existential risks of AI. That kind of reasoning strikes me as orders of magnitude more dangerous than anything SBF did.
I don’t think “risk aversion” was labelled as a cognitive bias by anyone in the economics orbit. It just flows from diminishing marginal utility of income. But please let me know if you have some references for this.
I don’t know about economics, but ‘risk aversion’ is standardly treated as a ‘cognitive bias’ in psychology, e.g. here
And the interviews with SBF (in the OP) seem to hint that he viewed risk aversion as more-or-less irrational, from the perspective of expected value theory.
I agree with your point that risk aversion regarding income is not ‘irrational’ given diminishing marginal utility of income.
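A quick numeric sketch of that diminishing-marginal-utility point (made-up numbers, with log utility standing in for any concave utility function): a gamble can have positive expected monetary value while lowering expected utility, so declining it is not a ‘bias’.

```python
import math

# Illustrative sketch: an agent with log utility (any concave utility works)
# facing a 50/50 gamble on current wealth of 100: gain 50% or lose 40%.
wealth = 100.0
outcomes = [(0.5, 1.5 * wealth), (0.5, 0.6 * wealth)]

expected_wealth = sum(p * w for p, w in outcomes)
expected_utility = sum(p * math.log(w) for p, w in outcomes)

print(f"expected wealth if taken:  {expected_wealth:.1f}")   # 105.0 > 100
print(f"utility of declining:      {math.log(wealth):.4f}")  # log(100) ~ 4.605
print(f"expected utility if taken: {expected_utility:.4f}")  # ~ 4.552 < 4.605
```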
I disagree, and I view Joseph Richardson’s comment as explaining why it’s limited to finance rather than indicating a systemic problem:
Although EA risk attitudes may have played a role in FTX’s demise, I think to the extent that is true it is due to the peculiar nature of finance rather than EA advice being wrong in most instances. Specifically, impact in most areas (e.g., media, innovations, charitable impact) is heavily right-tailed but financial exchanges have a major left-tailed risk of collapse. As human expectations of success are heavily formed and biased by our most recent similar experiences, this will cause people to not take enough risk when the value is in the right tail (as median<mean) and take on too much when there are major failures in the left tail (as median>mean).
If this is true, we may need to consider which specific situations have these left-tailed properties and to be cautious about discouraging too much risk taking in those domains. However, I suspect that this situation may be very rare and has few implications for what EAs should do going forwards.
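For intuition on the median-vs-mean point in that quote, here is a small illustrative sketch using arbitrary, made-up distributions: in a right-tailed domain the mean sits above the median (the ‘typical’ outcome understates the upside), while in a left-tailed domain with rare blow-ups the mean sits below the median (the ‘typical’ outcome overstates how safe things are).

```python
import random
import statistics

random.seed(0)
N = 200_000

# Right-tailed domain (e.g. project impact): lognormal, so mean > median.
right_tailed = [random.lognormvariate(0, 1.5) for _ in range(N)]

def left_tailed_draw() -> float:
    # Left-tailed domain (e.g. a levered exchange): 99% of the time earn
    # 1 unit; 1% of the time lose 200 units (ruin). So median > mean.
    return 1.0 if random.random() < 0.99 else -200.0

left_tailed = [left_tailed_draw() for _ in range(N)]

for name, xs in [("right-tailed", right_tailed), ("left-tailed", left_tailed)]:
    print(f"{name:12s} mean = {statistics.fmean(xs):8.2f}   "
          f"median = {statistics.median(xs):8.2f}")
```

Anyone anchoring on the median (the recent, typical experience) will under-invest in the right-tailed domain and over-invest in the left-tailed one, which is the asymmetry Richardson describes.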
If this issue is limited to finance, why do you think that animals of most species studied so far seem to show loss aversion, and often show risk aversion?
Why would these ‘cognitive biases’ have evolved so widely?
I have the answer, and it is right in my quote.

Also, we are severely misaligned with evolution, to the point that in certain areas we are (from evolution’s perspective) both inner-misaligned and outer-misaligned, so our goals can be arbitrarily different from evolution’s goals.
It’s a textbook inner alignment and outer alignment failure.
Sorry, but I don’t understand your reply.

Are you saying that humans show too much loss aversion and risk aversion, and these ‘biases’ are maladaptive (unaligned with evolution)? Or that humans don’t show enough loss aversion and risk aversion, compared to what evolution would have favored?
‘Inner alignment’ and ‘outer alignment’ aren’t very helpful bits of AI safety jargon in this context, IMHO.
Are you saying that humans show too much loss aversion and risk aversion, and these ‘biases’ are maladaptive (unaligned with evolution)? Or that humans don’t show enough loss aversion and risk aversion, compared to what evolution would have favored?
Yes, in both cases.
The basic issue is that ignoring heavy tails to the right will make you too risk-averse, while ignoring heavy tails to the left will make you too risk-seeking.
An example of a heavy tail to the left is finance, where riskiness can blow you up but doesn’t give you that much more to donate. Thus, SBF took too much risk, and took too much wrong-way risk in particular.
An example of a heavy tail to the right is job performance, where the worst case is mediocre performance while the best can be amazing. Thus, there is likely not enough risk taking there. Link here:

https://forum.effectivealtruism.org/posts/ntLmCbHE2XKhfbzaX/how-much-does-performance-differ-between-people
And to be clear: the world we live in now, with complicated societies and newfangled phones, is totally unlike anything evolution ‘wanted’, which is why I brought up the misalignment framing from AI safety.