Is that 3% an absolute percentage point reduction in risk? If so, that doesn't seem very low if your baseline risk estimate is low, like 5-20%, or you're as pessimistic about aligning AI as MIRI is.
No, 3% is "chance of success". After adding a bunch of multipliers, it comes to about 0.6% reduction in existential risk over the next century, for $8B to $20B.
2 nitpicks that end up arguing in favor of your high-level point:
- 2.7% (which you're rounding up to 3%) is the chance of having an effect, and 70% × 2.7% = 1.9% is the chance of a positive effect ("success" by your wording)
- your Squiggle calc doesn't include the CCM's "intervention backfiring" part of the calc
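The decomposition behind these nitpicks can be sketched in a few lines of Python. The 2.7% and 70% figures come from the discussion above; the way the backfire branch is split out is an illustrative assumption, not the CCM's actual Squiggle code.

```python
# Illustrative sketch of the probability decomposition discussed above.
# p_effect and p_positive are the figures quoted in the comment; the
# backfire split is a hypothetical reading of the CCM structure.
p_effect = 0.027               # chance the intervention has any effect (~3% rounded)
p_positive = 0.70              # chance an effect is positive rather than a backfire

p_success = p_effect * p_positive        # chance of a positive effect
p_backfire = p_effect * (1 - p_positive) # the term a calc omits if it ignores backfiring

print(f"chance of success:  {p_success:.2%}")
print(f"chance of backfire: {p_backfire:.2%}")
```

This makes the two nitpicks concrete: "chance of success" is 1.89% rather than 2.7%, and any calc that drops `p_backfire` overstates the expected benefit.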