I think this critique misses how EV maximization works in a world with many actors taking uncorrelated risks.
Consider your Scenario 2: Individual actors choosing between Option A (0% harm chance, +9.9999 utility) vs Option B (20% harm chance, +10 utility). If we have 1000 altruistic actors each making independent choices with similar profiles, and they all choose Option B (higher EV), we'd expect:
800 successful outcomes (+8000 utility)
200 harmful outcomes (negative utility)
Net positive impact far exceeding what we'd get if everyone chose the "safe" option
This is portfolio theory applied to altruism. Just as index funds maximize returns by holding many uncorrelated assets, the altruistic community maximizes impact when individuals make risk-neutral EV calculations on independent projects.
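A quick way to see the portfolio effect is to simulate it. The scenario above doesn't specify the payoffs behind the 20% harm chance, so the +15 / -10 split below is an assumption, chosen only so that Option B's per-actor EV comes out to +10 as stated; everything else follows the setup.

```python
import random

# Hypothetical payoffs (not given in the scenario): chosen so Option B's
# per-actor EV is 0.8 * 15 + 0.2 * (-10) = +10, as stated above.
SUCCESS_PAYOFF, HARM_PAYOFF, P_HARM = 15.0, -10.0, 0.2
N_ACTORS = 1000
SAFE_TOTAL = N_ACTORS * 9.9999  # everyone takes the certain Option A

def aggregate_option_b(rng: random.Random) -> float:
    """Total utility when all 1000 actors independently take Option B."""
    return sum(HARM_PAYOFF if rng.random() < P_HARM else SUCCESS_PAYOFF
               for _ in range(N_ACTORS))

rng = random.Random(0)
totals = [aggregate_option_b(rng) for _ in range(2000)]
mean = sum(totals) / len(totals)
sd = (sum((t - mean) ** 2 for t in totals) / len(totals)) ** 0.5

print(f"everyone picks A: {SAFE_TOTAL:.1f} (certain)")
print(f"everyone picks B: mean ~{mean:.0f}, sd ~{sd:.0f}")
# Expected: mean ~10000, sd ~316. With independent bets the spread of the
# aggregate shrinks as 1/sqrt(N) relative to its mean: the portfolio effect.
```

The design point is that independence is doing all the work here: the ~316 sd on a ~10000 mean is what lets the community treat the aggregate as nearly deterministic, which is exactly what breaks down under the caveats below.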
The key caveats:
For large actors (major foundations, governments, AI companies, etc.), risk aversion makes more sense since their failures aren't offset by others' successes
For correlated risks (like your animal welfare example where many actors might simultaneously cause harm based on shared wrong beliefs), we need more caution
But for most EA individuals working on diverse, independent projects? Risk-neutral EV maximization is exactly what we want everyone doing. The portfolio effect means we'll get the best aggregate outcome even though some individual bets will fail.
Are the projects of most EA individuals truly independent in the sense of their EVs being essentially uncorrelated with each other? That would be surprising to me, given that many of those projects are conditional on positive evaluation from a small number of funders, and many arose out of the same meta (so would be expected to have meaningful correlations with other active projects).
So my prediction is that most EA stuff falls into one of your two caveats. What I don't have a good sense of is how correlated the average EA work is, and thus the degree of caution / risk aversion implied by the caveats.
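For a rough sense of how fast correlation erodes the portfolio effect: under a common-pairwise-correlation model (an assumption; real funder and shared-meta correlations are messier than a single rho), the sd of the aggregate is sqrt(N * sigma^2 * (1 + (N - 1) * rho)). The sigma below reuses the hypothetical payoffs from the earlier sketch.

```python
# Variance of a sum of N bets, each with sd sigma and common pairwise
# correlation rho: Var(sum) = N * sigma**2 * (1 + (N - 1) * rho).
N = 1000
SIGMA = 10.0  # per-bet sd implied by the earlier hypothetical payoffs

for rho in (0.0, 0.05, 0.2, 0.5):
    sd_total = (N * SIGMA**2 * (1 + (N - 1) * rho)) ** 0.5
    print(f"rho={rho:.2f}: sd of aggregate = {sd_total:,.0f}")
# rho=0.00: sd ~316    (full portfolio effect)
# rho=0.05: sd ~2,257  (most of the variance reduction is already gone)
# rho=0.20: sd ~4,481
# rho=0.50: sd ~7,075  (nearly as risky as one big correlated bet)
```

If these illustrative numbers are even roughly right, the independence caveat bites at quite low correlations: a shared funder or shared worldview doesn't need to induce much co-movement before the community is effectively making one large bet.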
In theory I agree with this, but in practice I personally think "risk-neutral EV maximisation" can lead to bets which are far worse than they appear to be. This is because I think we often massively overrate the EV of "hits-based approaches".
Generally I think the lower the probability of a bet, the higher the chance that its EV is wrong and lower than stated. I'm keen to see evidence of high-risk bets turning out well once in a while before I'm convinced that they really do have the claimed EVs...
Then your issue is with systematically flawed reasoning overestimating the likelihood of low-probability events. The solution for that would be to apply a correction factor for this systematic epistemic bias, and then proceed with risk-neutral EV maximization (again, with the caveats I mentioned in my initial comment).
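One concrete version of that adjustment (a sketch, not a claim about how anyone actually does this) is normal-normal Bayesian shrinkage: treat the claimed EV as a noisy estimate, assume noisier estimates for lower-probability bets, and shrink toward a prior before maximizing. All the numbers below are placeholders.

```python
# Minimal sketch of "adjust for the bias, then maximize EV" via
# normal-normal Bayesian shrinkage. The prior over true EVs and the noise
# sd of an estimate (larger for speculative, hits-based bets) are assumptions.

def shrunk_ev(claimed_ev: float, estimate_sd: float,
              prior_mean: float = 0.0, prior_sd: float = 5.0) -> float:
    """Posterior mean EV given a noisy claimed EV and a prior over true EVs."""
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight on the claim
    return w * claimed_ev + (1 - w) * prior_mean

# A modest, well-measured bet barely moves; a speculative long shot with a
# huge claimed EV and a very noisy estimate gets discounted heavily.
print(shrunk_ev(claimed_ev=10, estimate_sd=2))     # ~8.6
print(shrunk_ev(claimed_ev=1000, estimate_sd=50))  # ~9.9
```

After this kind of shrinkage, ranking options by adjusted EV and proceeding risk-neutrally is exactly the move described above; the open question is what prior and noise levels are realistic for hits-based work.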
I think this is true as a response in certain cases, but many philanthropic interventions probably aren't tried enough times to get the sample size, and lots of communities are small. It's pretty easy to imagine a situation like the following (see the sketch below):
You and a handful of other people make some positive EV bets.
The median outcome from doing this is that the world is worse: all of the attempts at these bets end up neutral or negative.
The positive EV is never realized and the world is worse on average, despite both the individuals and the ecosystem being +EV.
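Here is that situation as a minimal simulation, with made-up numbers (5 actors, 5%-probability bets); only the shape of the result matters, not the specific payoffs.

```python
import random

# Made-up numbers: 5 actors each take one bet with a 5% chance of +100 and
# a 95% chance of -1, so each bet has EV = 0.05*100 - 0.95*1 = +4.05.
P_HIT, HIT, MISS = 0.05, 100.0, -1.0
N_ACTORS, N_WORLDS = 5, 100_000

rng = random.Random(0)
totals = sorted(
    sum(HIT if rng.random() < P_HIT else MISS for _ in range(N_ACTORS))
    for _ in range(N_WORLDS)
)
mean = sum(totals) / N_WORLDS
median = totals[N_WORLDS // 2]
p_all_miss = sum(t == N_ACTORS * MISS for t in totals) / N_WORLDS

print(f"mean outcome:      {mean:+.1f}")    # ~ +20 (5 bets * +4.05 EV)
print(f"median outcome:    {median:+.1f}")  # -5: every single bet fails
print(f"P(no hits at all): {p_all_miss:.2f}")  # 0.95**5 ~ 0.77
```

With only a handful of long shots, the most likely world is the one where every bet fails, even though the mean across possible worlds is solidly positive; the portfolio argument only starts to bite once the expected number of hits is comfortably above zero.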
It seems like this response would imply you should only do EV maximization if your movement is large (or that its impact is reliably predictable if the movement is large).
But I do think this is a fair point overall, though you could imagine a large system of interventions with the same features I describe that would have the same issues as a whole.