No, longtermism is not redundant
I’m not keen on the recent trend of arguments that persuading people of longtermism is unnecessary, or even counterproductive, for encouraging them to work on certain cause areas (e.g., here, here). This is for a few reasons:
It’s not enough to believe that extinction risks within our lifetimes are high, and that extinction would constitute a significant moral problem purely on the grounds of harms to existing beings. The cases I’ve seen for prioritizing extinction risk reduction on non-longtermist grounds lack arguments that reducing those risks is tractable enough to outweigh the near-term good done by focusing on global human health or animal welfare.
Take the AI alignment problem as one example (among the possible extinction risks, I’m most familiar with this one). I think it’s plausible that the collective efforts of alignment researchers and people working on governance will prevent extinction, though I’m not prepared to put a number on this. But as far as I’ve seen, there haven’t been compelling cost-effectiveness estimates suggesting that the marginal dollar or work-hour invested in alignment is competitive with GiveWell charities or interventions against factory farming, from a purely neartermist perspective. (Shulman discusses this in this interview, but without specifics about tractability that I would find persuasive.)
More importantly, not all longtermist cause areas are risks that would befall currently existing beings. MacAskill discusses this a bit here, including the importance of shaping the values of the future rather than (I would say “complacently”) supposing things will converge towards a utopia by default. Near-term extinction risks do seem likely to be the most time-sensitive thing that non-downside-focused longtermists would want to prioritize. But again, tractability makes a difference, and for those who are downside-focused, there simply isn’t this convenient convergence between near- and long-term interventions. As far as I can tell, s-risks affecting beings in the near future fortunately seem highly unlikely.