Thanks for the post. I agree that those who embrace the asymmetry should be concerned about risks of future suffering.
I would guess that few EAs have a pure time preference for the short term. Rather, I suspect that most short-term-focused EAs are uncertain of the tractability of far-future work (due to long, complex, hard-to-predict causal chains), and some (such as a coalition within my own moral parliament) may be risk-averse. You’re right that these considerations also apply to non-suffering-focused utilitarians.
It’s tempting to say that the asymmetry implies that the expected value of a minuscule increase in existential risk to all sentient life is astronomical.
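To make the arithmetic behind that claim explicit (the numbers below are purely illustrative assumptions, not figures from the original discussion): if the far future would otherwise contain on the order of $V$ suffering-filled life-years, then a straightforward expected-value calculation says even a tiny increase $\Delta p$ in the probability of extinction averts an enormous expected amount of suffering.

$$
\Delta \mathrm{EV} \;=\; \Delta p \times V \;\approx\; 10^{-9} \times 10^{30} \;=\; 10^{21}\ \text{expected suffering-filled life-years averted}
$$

Here $\Delta p = 10^{-9}$ and $V = 10^{30}$ are hypothetical placeholders; the point is only that any astronomical $V$ makes the product astronomical even for very small $\Delta p$.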
As you mention, there are complexities that need to be accounted for. For example, one should think about how catastrophic risks (almost all of which would not cause human extinction) would affect the trajectory of the far future.
It’s much easier to get people behind not spreading astronomical amounts of suffering in the future than behind eliminating all current humans, so a more moderate approach is probably better. (Of course, it’s also difficult to steer humanity’s future trajectory in ways that ensure that suffering-averting measures are actually carried out.)
Just to add to this: in my anecdotal experience, the most common argument among EAs for not focusing on x-risk or the far future is risk aversion.
Thanks for this. It’d be interesting to see survey evidence on this question. Some anecdotal evidence pointing the other way: on the EA Funds page, Beckstead mentions person-affecting views as one of the reasons one might not go into far-future causes (https://app.effectivealtruism.org/funds/far-future), and some GiveWell staffers apparently endorse person-affecting views and avoid far-future work on that basis (http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058).