You’re preaching to the choir here on the EA Forum, but I think most people outside this community will intuit the slippery slope this takes you down:
A 0.1% chance of saving 5 million lives has the same EV as a 0.0000001% chance of saving 5 trillion lives.
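To spell out the arithmetic, both sides come out to 5,000 expected lives:

$$0.1\% \times 5 \times 10^{6} \;=\; 5{,}000 \;=\; 0.0000001\% \times 5 \times 10^{12}$$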
Somewhere between those two, this becomes a Pascal’s Mugging, which we generally seem to agree is a bad reason to do something.
Where’s the line?
Feels like there’s some line where your numbers are getting so tiny and speculative that many other considerations start dominating, like “are your numbers actually right?” E.g. I’d be pretty skeptical of many proposed “0.0000001% of a huge number” interventions (and especially skeptical of the 0.0000001% side).
In practice, the line could be where “are your numbers actually right?” starts becoming the dominant consideration. At that point, showing your numbers are plausible is the main challenge to overcome, and that is honestly where I suspect most people’s anti-low-probability intuitions come from in the first place.