reducing existential risk by .00001 percent to protect 10^18 future humans
Very-small-probability of very-large-impact is a straw man. People who think AGI risk is an important cause area think that because they also think that the probability is large.
I don’t see how that matters exactly? OP is talking about their effect, and I don’t think any work on AI safety to date has lowered the chance of catastrophe by more than a tiny amount.