I disagree; I view Joseph Richardson’s comment as explaining why this is limited to finance rather than indicating a systemic problem:
Although EA risk attitudes may have played a role in FTX’s demise, I think to the extent that is true it is due to the peculiar nature of finance rather than EA advice being wrong in most instances. Specifically, impact in most areas (e.g., media, innovations, charitable impact) is heavily right-tailed but financial exchanges have a major left-tailed risk of collapse. As human expectations of success are heavily formed and biased by our most recent similar experiences, this will cause people to not take enough risk when the value is in the right tail (as median<mean) and take on too much when there are major failures in the left tail (as median>mean).
If this is true, we may need to consider which specific situations have these left-tailed properties and to be cautious about discouraging too much risk taking in those domains. However, I suspect that this situation may be very rare and has few implications for what EAs should do going forwards.
If this issue is limited to finance, why do you think that animals of most species studied so far seem to show loss aversion, and often show risk aversion?
Why would these ‘cognitive biases’ have evolved so widely?
I have the answer, and it is right in my quote.
Also, we are severely misaligned with evolution, to the point that in certain areas we are, by evolution’s lights, both inner misaligned and outer misaligned; our goals can be arbitrarily different from the goal evolution was optimizing for.
It’s a textbook inner alignment and outer alignment failure.
Sorry, but I don’t understand your reply.
Are you saying that humans show too much loss aversion and risk aversion, and these ‘biases’ are maladaptive (unaligned with evolution)? Or that humans don’t show enough loss aversion and risk aversion, compared to what evolution would have favored?
‘Inner alignment’ and ‘outer alignment’ aren’t very helpful bits of AI safety jargon in this context, IMHO.
Yes, in both cases.
The basic issue is that ignoring heavy tails to the right will give you too much risk aversion, while ignoring heavy tails to the left will give you too much risk-seeking.
An example of a heavy left tail is finance, where riskiness can blow you up but doesn’t give you that much more to donate. Thus, SBF took too much risk, and took on too much wrong-way risk in particular.
An example of a heavy right tail is job performance, where the worst case is mediocre performance while the best can be amazing. Thus, people there likely don’t take enough risk.
Link here:
https://forum.effectivealtruism.org/posts/ntLmCbHE2XKhfbzaX/how-much-does-performance-differ-between-people
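The tail asymmetry above can be sketched with a quick simulation. This is only an illustrative toy model, not data about finance or jobs: I’m assuming a lognormal distribution for the right-tailed case and a rare-blowup payoff for the left-tailed case.

```python
# Toy illustration of the median-vs-mean point (assumed distributions,
# illustrative numbers only).
import random
import statistics

random.seed(0)
N = 100_000

# Right-tailed payoff (e.g., job performance or charitable impact):
# most outcomes are modest, a few are huge wins.
right_tailed = [random.lognormvariate(0, 1.5) for _ in range(N)]

# Left-tailed payoff (e.g., levered finance): small steady gains,
# but a rare catastrophic blow-up.
left_tailed = [1.0 if random.random() > 0.01 else -200.0 for _ in range(N)]

for name, xs in [("right-tailed", right_tailed), ("left-tailed", left_tailed)]:
    print(name,
          "mean:", round(statistics.mean(xs), 2),
          "median:", round(statistics.median(xs), 2))

# Right-tailed case: median < mean, so judging by the typical outcome
# understates expected value and people under-take risk.
# Left-tailed case: median > mean, so the typical outcome looks fine
# and people over-take risk.
```

If expectations are anchored on the median outcome rather than the mean, this reproduces the pattern in the quoted comment: too little risk-taking in right-tailed domains, too much in left-tailed ones.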
And to be clear: the world we now live in, with complicated societies and newfangled phones, runs totally against evolution’s “values”, which is why I brought up the misalignment framing from AI safety.