I think that makes you a longtermist though… having read What We Owe the Future anyway, unless I missed something:
I think a longtermist would say that the effects on future moral people should dominate our moral calculus due to their vast number, not necessarily that they can dominate it right now. But we should keep an eye out for ways to impact the long-run future positively and take such a chance if we ever see it. Some people think they see the chance now, so they are taking it.* Maybe you will never see something plausible-to-you within your lifetime. But that doesn’t mean a chance will never occur.
For example, if we could run amazing simulations to test long-run outcomes, I think a longtermist who believed in the tech would want to give the best-predicted long-term action a go (say, the action with the best sum of experiences had over time, summed at the end), using neutral moral weights for beings living today vs. 3 years from now vs. 1,000 years from now. Of course some options will have wider ranges and larger confidence intervals, and you’d factor those in when deciding which option to take. By contrast, a neartermist would add extra moral weight to the consequences and experiences of beings existing in the near term, on top of the differing ranges and confidence intervals that are sensible for both neartermists and longtermists to use.
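(A rough, purely illustrative sketch of that difference in scoring, in Python; the options, welfare numbers, near-term bonus, and horizon below are all made up for the example, not taken from the book:)

```python
# Illustrative sketch: how a longtermist vs. a neartermist might score the
# same options. The welfare streams and the neartermist's extra weighting
# are made-up numbers, just to show the structure of the comparison.

# Each option is a list of (years_from_now, expected_total_welfare) pairs,
# e.g. the predicted sum of experiences had in that period.
options = {
    "A (big near-term gain)": [(1, 100), (1000, 10)],
    "B (big long-run gain)":  [(1, 10), (1000, 200)],
}

def longtermist_score(stream):
    # Neutral moral weights: welfare 1,000 years out counts the same as today.
    return sum(welfare for _, welfare in stream)

def neartermist_score(stream, near_term_bonus=3.0, horizon=50):
    # Extra moral weight on beings existing in the near term.
    return sum(welfare * (near_term_bonus if year <= horizon else 1.0)
               for year, welfare in stream)

for name, stream in options.items():
    print(name, longtermist_score(stream), neartermist_score(stream))
# Here the longtermist prefers B (210 vs 110) while the neartermist
# prefers A (310 vs 230); uncertainty (wider confidence intervals on the
# long-run numbers) would be layered on top for both views.
```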
*I’ll note that some extinction risks seem to have a low enough chance of happening that they may not be worth working on unless you do add neutrally weighted future generations into the calculus. But it depends on the moral discount rate you use: if your expected rate of population growth (conditional on everything going well) is large, and your moral discount rate is gradual enough, you might still end up preferring a “longtermist” intervention. In that case you might not really be a true “longtermist” philosophically, because you are still claiming that future generations are morally worth less (regardless of confidence); focusing on the long term just passed your bar anyway because of the scale.
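(A toy version of that footnote’s arithmetic; the growth rate, discount rate, and horizon are invented for illustration, but they show how a discounted future can still dominate the calculus whenever population growth outpaces the discount:)

```python
# Toy numbers, assumed for illustration: population grows at g per generation,
# moral weight is discounted at d per generation. If g > d, the weighted
# population of far-future generations still dominates the calculus even
# though you're claiming future people matter less.
g = 0.02   # assumed population growth rate per generation (if all goes well)
d = 0.01   # assumed "gradual" moral discount rate per generation
N0 = 1.0   # current population, normalised

def weighted_population(t):
    # Population at generation t, times its discounted moral weight.
    return N0 * (1 + g) ** t / (1 + d) ** t

present = weighted_population(0)
far_future = sum(weighted_population(t) for t in range(100, 1000))
print(present, far_future)  # far_future >> present, so the long term wins
```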
Longtermism includes the claim that improving the future is tractable. I think that was probably a mistake, and it should just be a claim about values.
Daaaang yeah, that seems wrong to me too, unfortunately. I can imagine we have passed the threshold of coordination and technology where changing the long-run future somewhat predictably is a tractable cause now (and I can also imagine we haven’t), but I’m pretty sure there were many years in history when it would have been impossible to predict more than 5 years ahead. It seems to me that a philosophical position (which longtermism claims to be) should have been able to exist then too. But if it included tractability, that position was likely impossible to hold (correctly) at some moments in history, or in many single-actor thought experiments. But I’m no philosopher.