I have the answer, and it is right in my quote.
Also, we are severely misaligned with evolution, to the point that in certain areas we are (from evolution's perspective) both inner-misaligned and outer-misaligned, so our goals can be arbitrarily different from evolution's goal.
It's a textbook inner-alignment and outer-alignment failure.
Sorry, but I don’t understand your reply.
Are you saying that humans show too much loss aversion and risk aversion, and these ‘biases’ are maladaptive (unaligned with evolution)? Or that humans don’t show enough loss aversion and risk aversion, compared to what evolution would have favored?
‘Inner alignment’ and ‘outer alignment’ aren’t very helpful bits of AI safety jargon in this context, IMHO.
Yes, in both cases.
The basic issue is that ignoring a heavy right tail leaves you too risk-averse, while ignoring a heavy left tail leaves you too risk-seeking.
An example of a heavy left tail is finance, where risk-taking can blow you up but doesn't give you that much more to donate. Thus SBF took too much risk, and took too much wrong-way risk in particular.
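To make the left-tail point concrete, here is a minimal Monte Carlo sketch in Python. All the numbers (the 5% and 40% returns, the 10% ruin probability, the 30-year horizon) are made up for illustration; the point is that a strategy with a small chance of ruin can have a far higher expected value than a safe one while still bankrupting almost every path that follows it. Judging by expected value alone, i.e. ignoring the left tail, makes you take far too much risk.

```python
import random

# Hypothetical numbers, for illustration only: a "safe" strategy
# compounds at +5%/year; a "risky" strategy compounds at +40%/year
# but carries a 10% annual chance of total blowup (the left tail).

def final_wealth(strategy, years=30):
    wealth = 1.0
    for _ in range(years):
        if strategy == "risky":
            if random.random() < 0.10:  # the left-tail event: ruin
                return 0.0
            wealth *= 1.40
        else:
            wealth *= 1.05
    return wealth

random.seed(0)
trials = 100_000
risky = [final_wealth("risky") for _ in range(trials)]
safe = [final_wealth("safe") for _ in range(trials)]

# The risky strategy wins on *expected* wealth, because a few lucky
# paths compound enormously, but roughly 96% of paths end in ruin
# (survival probability is 0.9**30, about 4%).
print("risky mean:", sum(risky) / trials)
print("risky ruin fraction:", sum(w == 0.0 for w in risky) / trials)
print("safe mean:", sum(safe) / trials)
```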
An example of a heavy right tail is job performance, where the worst case is mediocre performance while the best can be amazing. Thus, there is likely too much risk aversion here.
Link here:
https://forum.effectivealtruism.org/posts/ntLmCbHE2XKhfbzaX/how-much-does-performance-differ-between-people
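And to illustrate the right-tail case: with a bounded downside and a heavy-tailed upside, anyone who judges options by their typical (median) outcome will pass on a gamble whose average value is far higher, because the value lives in the tail. Another minimal Python sketch, again with made-up numbers (the 1.0 safe payoff, the 0.8 floor, and the Pareto parameters are all assumptions for illustration):

```python
import random

# Hypothetical setup: the "safe" path pays exactly 1.0; the "risky"
# path has a floor of 0.8 (worst case: mediocre) plus a heavy
# right tail of upside.

def risky_payoff():
    upside = random.paretovariate(1.5) - 1.0  # >= 0, heavy-tailed
    return 0.8 + 0.4 * upside

random.seed(0)
samples = sorted(risky_payoff() for _ in range(100_000))
n = len(samples)

# The *typical* (median) risky outcome looks no better than the safe
# path's 1.0, but the mean is far higher: the tail carries the value.
print("risky median:", samples[n // 2])        # roughly 1.03
print("risky mean:  ", sum(samples) / n)       # roughly 1.6
print("P(risky > 2x safe):", sum(s > 2.0 for s in samples) / n)
```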
And to be clear: the world we live in now, with complex societies and newfangled phones, runs totally against evolution's "values", which is why I brought up the misalignment framing from AI safety.