You describe the view you’re examining as:

“cause areas related to existential risk reduction, such as AI safety, should be virtually infinitely preferred to other cause areas such as global poverty”
You then proceed by discussing considerations that are specific to the particular types of interventions you’re comparing—i.e., reducing extinction risk versus speeding up growth.
You might be interested in another type of argument questioning this view. These arguments attack the “virtually infinitely” part of the view in a way that’s agnostic about which interventions are being compared. For such arguments, see e.g.:
Brian Tomasik, Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness
Tobias Baumann, Uncertainty smooths out differences in impact
Thanks a lot, this all looks very useful. I found the texts by Tomasik and Baumann particularly interesting, and was not aware of them.