Longtermism as Effective Altruism
I think of longtermism as a type of Effective Altruism (EA). I’ve seen some people talking about longtermism as (almost) an alternative to EA, so this is a quick statement of my position.
EA says to allocate the total community budget to interventions with the highest marginal expected value. In other words, allocate your next dollar to the best intervention, where ‘best’ is evaluated conditional on current funding levels. This is important, because with diminishing marginal returns, an intervention’s marginal expected value falls as it is funded. So the best intervention could change as funding is allocated.
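As a concrete (and entirely toy) illustration of that allocation rule, here is a minimal Python sketch. The step size, the intervention names, and the value functions used later are all hypothetical; the only point is that each funding increment goes to whichever intervention has the highest marginal expected value at its current funding level.

```python
# Toy sketch of the allocation rule: give each funding increment to the
# intervention with the highest marginal expected value, evaluated at its
# current funding level. All names and numbers here are made up.

def marginal_ev(value_fn, funded, step=1_000):
    """Extra expected value from the next `step` dollars, given `funded` so far."""
    return value_fn(funded + step) - value_fn(funded)

def allocate(budget, interventions, step=1_000):
    """Greedy allocation: repeatedly fund whichever intervention looks best *now*."""
    funded = {name: 0 for name in interventions}
    for _ in range(budget // step):
        # 'Best' is re-evaluated conditional on current funding levels,
        # so the winner can change as diminishing returns set in.
        best = max(interventions,
                   key=lambda name: marginal_ev(interventions[name], funded[name], step))
        funded[best] += step
    return funded
```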
Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations. In general, calculating the expected value of an action over the entire potential future is quite difficult, because we run into the cluelessness problem, where we just don’t know what effects an action will have far into the future. But there is a subset of actions where long-term effects are predictable: actions affecting lock-in events like extinction or misaligned AGI spreading throughout the universe. (Cluelessness seems like an open problem: what should we do about actions with unpredictable long-term effects?)
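To put a rough number on "no discounting" (my own illustrative arithmetic, not from any particular source): under a pure time discount rate r, a life t years away gets weight 1/(1+r)^t, so even a modest r makes far-future lives count for almost nothing, while r = 0 counts them at full weight.

```python
# Weight on a life saved t years from now under a pure time discount rate r.
# Longtermism sets r = 0, so every future life keeps a weight of 1.
def moral_weight(t_years, r):
    return 1 / (1 + r) ** t_years

for r in (0.0, 0.01, 0.03):
    print(f"r = {r:.2f}: weight on a life 500 years out = {moral_weight(500, r):.2e}")
# At r = 1% a life 500 years out counts for well under 1% of a present life;
# at r = 0 it counts equally.
```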
Longtermist EA, then, says to allocate the community budget according to marginal expected value, without discounting future generations. Given humanity’s neglect of existential risks, the interventions with the highest marginal expected value may be those aimed at reducing such risks. And even with diminishing returns, these could still be the best interventions after large amounts of funding are allocated. But longtermist EAs are not committed only to interventions aimed at improving the far future. If a neartermist intervention turned out to have the highest marginal expected value, they would fund that, and then recalculate marginal expected value and reassess for the next round of funding allocation.
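Continuing the toy sketch above, and using the `allocate` function defined there (with the same caveat that the value curves and dollar figures are invented), a longtermist EA running that loop is not locked into existential-risk interventions: once the x-risk option's marginal returns have diminished enough, later increments flow to the neartermist option.

```python
import math  # for the toy logarithmic (diminishing-returns) value curve

# Hypothetical value functions in arbitrary 'expected value' units.
interventions = {
    "x_risk_reduction": lambda d: 20 * math.log1p(d / 10_000),  # sharply diminishing returns
    "bednets":          lambda d: d / 4_000,                    # roughly linear at this scale
}

print(allocate(budget=100_000, interventions=interventions))
# Early increments go to x-risk reduction; once its marginal expected value
# drops below the neartermist option's, the remaining increments switch over.
```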