>An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs.
By that token I expect that we nearly all would identify as longtermists. (And maybe you agree, as you say you find the term too broad).
But in the absence of a total utilitarian view, we don’t have a very solid empirical case that ‘the value of your action depends mostly on its effect on the long term future (probably through reducing extinction risk)’.
>This rules out views on which we should discount the future or that we should ignore the long-run indirect effects of our actions,
At the risk of being redundant: I think no one is advocating that sort of discounting.
>but would not rule out views on which it’s just empirically intractable to try to improve the long-term future. Part of the idea is that this definition would open the way to a debate about the relevant empirical issues, in particular on the tractability of affecting the long run. [...]
Semi-agree, but I think more rides on whether you accept ‘total utilitarianism’ as a population ethic. It seems fairly clear (to me at least) that there are things we can do that are likely to reduce extinction risk. However, if I don’t put a high value on ‘creating happy lives’ (or ‘creating happy digital beings’ if we are going full avant garde) I might find it more effective to work to improve the lives of people and animals today (or those likely to exist in the near future).
>In my view, this definition would be too broad. I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes. I see the term ‘longtermism’ creating value if it results in more people taking action to help ensure that the long-run future goes well.
But there are tradeoffs, and I think these are likely to be consequential in important cases. In particular: ‘should additional funding go to reducing pandemic and AI risk, or towards alleviating poverty or lobbying for animal welfare improvements?’
>(And maybe you agree, as you say you find the term too broad).
To be clear, I’m quoting MacAskill.
>However, if I don’t put a high value on ‘creating happy lives’ (or ‘creating happy digital beings’ if we are going full avant garde) I might find it more effective to work to improve the lives of people and animals today (or those likely to exist in the near future).
Do you see preventing extinction as equivalent to ‘creating happy lives’? I guess if you hold a person-affecting view, then extinction is bad because it kills the current population, but the fact that it prevents the existence of future generations is not seen as bad.
I see ‘extinction’ as doing a few things, which people with different ethics and beliefs might value differently:
- Killing the current generation and maybe causing them to suffer/lose something. Probably all ethical views see this as bad.
- Preventing the creation of more lives, possibly many more. So, preventing extinction is ‘creating more lives’.
  - Happy lives? We can’t be sure, but maybe the issue of happiness vs. suffering should be put in a different discussion?
  - Assuming the lives extinction would have prevented are happy, the total utilitarian would value this part, and that’s where they see most of the value, dominating all other concerns.
  - Someone with person-affecting views would, I guess, not see any value in this part.
  - Someone whose valuation is increasing but concave (diminishing returns) in the number of happy lives might also value this part, but perhaps not so much that it dominates all other concerns (e.g., about present humanity).
- Wiping out “humanity and our culture”; people may also see this as a bad for non-utilitarian reasons.
>But in the absence of a total utilitarian view, we don’t have a very solid empirical case that ‘the value of your action depends mostly on its effect on the long term future (probably through reducing extinction risk)’.
I think this definition just assumes longtermist interventions are tractable, instead of proving it.
My statement above (not a ‘definition’, right?) is that
If you are not a total utilitarian, you don’t value “creating more lives” … at least not without some diminishing returns in your valuation … perhaps you value reducing suffering or increasing happiness for people, now and in the future, who will definitely or very likely exist...
then it is not clear that
“[A] reducing extinction risk is better than anything else we can do” …
because there’s also a strong case that, if the world is getting better, then helping people and animals right now is the most cost-effective solution.
Without the ‘extinction forecloses an expected number of future people that is larger by many orders of magnitude’ cost, there is not a clear case that [A] reducing extinction risk must be the best use of our resources.
Now, suppose I were a total population utilitarian. Then there may be a strong case for [A]. But still maybe not; this seems to depend on empirical claims.
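To make the orders-of-magnitude point concrete, here is a toy expected-value sketch. The numbers are placeholder assumptions of mine (a 0.1 percentage-point reduction in extinction risk, $10^{16}$ expected future lives conditional on survival, and roughly $8 \times 10^{9}$ people alive today), not estimates from this discussion:

$$
\underbrace{\Delta p \cdot N_{\text{future}}}_{\text{total utilitarian}} = 10^{-3} \times 10^{16} = 10^{13}
\qquad \text{vs.} \qquad
\underbrace{\Delta p \cdot N_{\text{present}}}_{\text{person-affecting}} = 10^{-3} \times \left(8 \times 10^{9}\right) = 8 \times 10^{6}.
$$

The first term dwarfs the second by about six orders of magnitude, which is the total-utilitarian case for [A]. A valuation with diminishing returns in the number of lives compresses that gap drastically (e.g., with $V(N) = \ln N$, the $10^{16}$ figure contributes only $\ln 10^{16} \approx 37$), and a person-affecting view drops the future-lives term altogether, so [A] stops being a foregone conclusion.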
To me ‘reducing extinction risks’ seemed fairly obviously tractable, but on second thought, I can imagine some cases in which even this would be doubtful. Maybe, e.g., reducing the risk of nuclear war in the next 100 years actually has little impact on extinction risk, because extinction is so likely anyway?!
Another important claim seems to be that there is a real likelihood of expansion beyond the Earth to other planets/solar systems, etc. Yet another is that ‘digital beings can have positively valenced existences’.
I’m referring to this common definition of longtermism:
>‘the value of your action depends mostly on its effect on the long term future’
Got it. I’m not sure that this “common definition of longtermism” would or should be widely accepted by longtermists, upon reflection. As you suggest, it is a claim about an in-principle measurable outcome (‘value mostly depends on the long-term future’, VMDLT for short). It is not a core belief or value.
The truth value of VMDLT depends on a combination of empirical things (e.g., the potential to affect the long-term future, the likely positive nature of that future, …) and moral-value things (especially total utilitarianism).[1]
What I find slightly strange about this definition of longtermism in an EA context is that it presumes one does the careful analysis with “good epistemics” and then gets to the VMDLT conclusion. But if that is the case, then how can we define “longtermist thinking” or “longtermist ideas”?
As an off-the-cuff analogy, suppose we were all trying to evaluate the merit of boosting nuclear energy as a source of power. We stated and defended our set of overlapping core beliefs, consulted similar data and evidence, and came up with estimates and simulations. Our estimates of the net benefit of nuclear spread out across a wide range: sometimes close to 0, sometimes negative, sometimes positive, sometimes very positive.
Would it then make sense to call the people who found it to be very positive “nuclear-ists”? What about those who found it to be just a bit better than 0 in expectation? Should all these people be thought of as a coherent movement and thought group? Should they meet and coalesce around the fact that their results found that Nuclear > 0?
But I think there is not a unique path to getting there; I suspect a range of combinations of empirical and moral beliefs could get you to VMDLT… or not.
Yes, I agree. I think longtermism is a step backwards from the original EA framework of importance/tractability/crowdedness, where we allocate resources to the interventions with the highest expected value. If those happen to be aimed at future generations, great. But we’re going to have a portfolio of interventions, and the ‘best’ intervention (which optimally receives the marginal funding dollar) will change as increased funding decreases marginal returns.
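As a minimal sketch of that last point (this is just standard marginal reasoning with my own notation, not anything claimed in the thread): if cause $i$ turns funding $x_i$ into value $v_i(x_i)$ with diminishing returns ($v_i'' < 0$), then the optimal split of a budget $B$ funds causes until their marginal returns are equalized:

$$
\max_{x_1, \dots, x_n \ge 0} \; \sum_i v_i(x_i) \quad \text{s.t.} \quad \sum_i x_i = B
\qquad \Longrightarrow \qquad
v_i'(x_i^*) = \lambda \;\; \text{for every cause with } x_i^* > 0.
$$

So even if a longtermist intervention is the best use of the first marginal dollar, the optimum is generally a portfolio, and which intervention is ‘best’ at the margin shifts as each area’s returns diminish.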