Improving general human empowerment, or doing “common sense good” (or something like that), could turn out to be the best way to reduce existential risk. Personally, though, I find that unclear: many existential risks are man-made, and there seem to be more specific things we can do about them.
GiveWell also selects charities on the basis of room for more funding, team quality, and transparency—things you’d want in any charity no matter your outcome metric—and that might raise the probability above 1%.
A strong argument might be made that poverty contributes to existential risk: people in poverty can fuel social instability, with the resulting potential for various forms of terrorism, sabotage, and other Black Swan scenarios.
Indeed. Valuation of outcomes is one of several multiplicative factors.