But in the absence of a total utilitarian view, we don’t have a very solid empirical case that ‘the value of your action depends mostly on its effect on the long term future (probably through reducing extinction risk)’.
I think this definition just assumes longtermist interventions are tractable, instead of proving it.
My statement above (not a ‘definition’, right?) is that
If you are not a total utilitarian, you don’t value “creating more lives” … at least not without some diminishing returns in that value … perhaps you instead value reducing suffering or increasing happiness for people, now and in the future, who will definitely or very likely exist...
then it is not clear that
“[A] reducing extinction risk is better than anything else we can do” …
because there’s also a strong case that, if the world is getting better, then helping people and animals right now is the most cost-effective solution.
Without the ‘extinction rules out a number of future people that is, in expectation, very many orders of magnitude larger’ cost, there is not a clear case that [A] reducing extinction risk must be the best use of our resources.
Now, suppose I were a total population utilitarian. Then there may be a strong case for [A]. But still maybe not; this seems to depend on empirical claims.
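To make the dependence on moral assumptions concrete, here is a toy back-of-the-envelope sketch; every number in it is made up purely for illustration, not an estimate:

```python
# Toy illustration (made-up numbers): how the ranking of "reduce extinction
# risk" vs. "help people now" can flip with the moral view adopted.

present_people = 8e9             # people alive now (rough)
potential_future_people = 1e16   # assumed under a "vast future" scenario
risk_reduction = 1e-6            # assumed absolute reduction in extinction probability
near_term_benefit = 1e6          # assumed lives substantially helped, now, by the same budget

# Total utilitarian: every potential future life counts fully.
ev_xrisk_total = risk_reduction * (present_people + potential_future_people)

# Person-affecting-ish view: only people who will (very likely) exist count.
ev_xrisk_person_affecting = risk_reduction * present_people

print(f"x-risk, total view:            {ev_xrisk_total:,.0f}")
print(f"x-risk, person-affecting view: {ev_xrisk_person_affecting:,.0f}")
print(f"helping people now:            {near_term_benefit:,.0f}")
# With these made-up numbers the x-risk intervention dominates under the total
# view (~1e10) but loses to near-term help (~8e3 vs ~1e6) once potential
# future lives are excluded. Change the inputs and the ranking changes.
```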
To me, ‘reducing extinction risk’ seemed fairly obviously tractable, but on second thought I can imagine cases in which even this would be doubtful. Maybe, for example, reducing the risk of nuclear war in the next 100 years actually has little impact on extinction risk, because extinction is so likely anyway?!
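A quick way to see how that could happen, with purely assumed probabilities:

```python
# Toy calculation (assumed probabilities, for illustration only).
p_nuclear_extinction = 0.01   # assumed chance nuclear war leads to extinction this century
p_other_extinction = 0.90     # assumed chance of extinction from all other causes

def p_extinction(p_nuclear: float, p_other: float) -> float:
    """Probability of extinction when the two (assumed independent) risks are combined."""
    return 1 - (1 - p_nuclear) * (1 - p_other)

baseline = p_extinction(p_nuclear_extinction, p_other_extinction)
no_nuclear = p_extinction(0.0, p_other_extinction)

print(f"total extinction risk:          {baseline:.3f}")    # 0.901
print(f"after eliminating nuclear risk: {no_nuclear:.3f}")   # 0.900
# Removing the nuclear risk entirely only cuts total extinction risk by ~0.001
# here, because the (assumed) background risk is so high.
```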
Another important claim seems to be that there is a real likelihood of expansion beyond Earth to other planets, solar systems, etc. Yet another is that ‘digital beings can have positively valenced existences’.
I’m referring to this common definition of longtermism:
>’the value of your action depends mostly on its effect on the long term future’
Got it. I’m not sure that this “common definition of longtermism” would or should be widely accepted by longtermists, upon reflection. As you suggest, it is a claim about an in-principle measurable outcome (‘value mostly depends on the long term’, hereafter VMDLT). It is not a core belief or value.
The truth value of VMDLT depends on a combination of empirical things (e.g., the potential to affect the long-term future, the likely positive nature of that future, …) and moral-value things (especially total utilitarianism).[1]
What I find slightly strange about this definition of longtermism in an EA context is that it presumes one does the careful analysis with “good epistemics” and then gets to the VMDLT conclusion. But if that is the case, then how can we define “longtermist thinking” or “longtermist ideas”?
By way of an off-the-cuff analogy, suppose we were all trying to evaluate the merits of boosting nuclear energy as a source of power. We stated and defended our sets of overlapping core beliefs, consulted similar data and evidence, and came up with estimates and simulations. Our estimates of the net benefit of nuclear power spread across a wide range: sometimes close to 0, sometimes negative, sometimes positive, sometimes very positive.
Would it then make sense to call the people who found it to be very positive “nuclear-ists”? What about those who found it to be just a bit better than 0 in expectation? Should all these people be thought of as a coherent movement and thought group? Should they meet and coalesce around the fact that their results found that Nuclear > 0?
But I think there is not a unique path to getting there; I suspect a range of combinations of empirical and moral beliefs could get you to VMDLT… or not.
Yes, I agree. I think longtermism is a step backwards from the original EA framework of importance/tractability/crowdedness, where we allocate resources to the interventions with the highest expected value. If those happen to be aimed at future generations, great. But we’re going to have a portfolio of interventions, and the ‘best’ intervention (which optimally receives the marginal funding dollar) will change as increased funding decreases marginal returns.
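A minimal sketch of that allocation logic, using made-up diminishing-returns curves (none of these figures are real estimates):

```python
# Toy greedy allocation: each marginal $1M goes to whichever intervention
# currently has the highest marginal expected value (made-up curves).

# Marginal expected value per extra $1M, as a function of $M already allocated.
interventions = {
    "global health":   lambda x: 100 / (1 + 0.05 * x),
    "animal welfare":  lambda x: 80 / (1 + 0.02 * x),
    "extinction risk": lambda x: 150 / (1 + 0.20 * x),
}

allocated = {name: 0.0 for name in interventions}
budget_millions = 100

for _ in range(budget_millions):
    # Pick the intervention with the highest current marginal value.
    best = max(interventions, key=lambda n: interventions[n](allocated[n]))
    allocated[best] += 1

print(allocated)
# With these made-up curves, the first few dollars go to "extinction risk"
# (highest initial marginal value), but as its returns diminish the marginal
# dollar shifts to the other interventions; no single cause stays "best".
```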