It seems to me that risk aversion and selfishness are orthogonal, i.e. they are different axes. Based on the case study of Alex, it seems that Alex does not truly believe, with their System 1, that a far-future cause is 10X better than a current cause. Their System 1 assigns a lower expected utility to donating to a far-future cause than to poverty relief, and the “risk aversion” is a post hoc rationalization of that subconscious System 1 calculus.
I’d suggest that Alex sit down and examine whether they have any emotional doubts about the 10X figure for the far-future cause, then translate those doubts into explicit weights on far-future donations versus poverty relief. Once Alex has their System 1 and System 2 aligned, they can proceed.
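To make the weighting exercise concrete, here is a minimal sketch of what it could look like as an expected-value calculation. All of the numbers (the 8% credence in particular) are hypothetical placeholders for whatever Alex's introspection actually turns up; the point is the structure, not the values.

```python
def effective_multiplier(stated_multiplier: float, credence: float) -> float:
    """Discount the stated impact estimate by System 1's credence that
    the estimate is right (assuming, for simplicity, that otherwise the
    donation achieves nothing)."""
    return credence * stated_multiplier

# System 2's stated figure: the far-future cause is 10X poverty relief.
stated = 10.0

# Suppose introspection reveals System 1 only gives ~8% credence to
# the 10X estimate holding up (a made-up number for illustration).
system1_credence = 0.08

far_future = effective_multiplier(stated, system1_credence)  # 0.8
poverty_relief = 1.0  # baseline: certain, well-measured impact

print(f"Far future (discounted): {far_future:.2f}")
print(f"Poverty relief:          {poverty_relief:.2f}")

# Under these assumptions, choosing poverty relief is not "risk
# aversion" at all: it is straightforward expected-value maximization
# using System 1's real credence. The far-future bet only wins once
# credence exceeds the breakeven point of 1/10.
breakeven = poverty_relief / stated
print(f"Breakeven credence: {breakeven:.0%}")
```

If Alex's honest credence comes out above the breakeven, System 1 and System 2 agree and the far-future donation goes through without any felt conflict; if it comes out below, the disagreement was never about risk in the first place.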