I’m referring to this common definition of longtermism:
>’the value of your action depends mostly on its effect on the long term future’
Got it. I’m not sure that this “common definition of longtermism” would or should be widely accepted by longtermists, upon reflection. As you suggest, it is a claim about an in-principle measurable outcome (‘value … mostly depends …’, call it VMDLT); it is not a core belief or value.

The truth value of VMDLT depends on a combination of empirical claims (e.g., our potential to affect the long-term future, the likely positive nature of that future, …) and moral commitments (especially total utilitarianism).[1]
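To make that concrete, here is one way the claim could be formalized (my sketch, not anything in the quoted definition): split the value of an action $a$ at some horizon into a near-term and a far-term component,

$$V(a) = V_{\text{near}}(a) + V_{\text{far}}(a),$$

so that VMDLT becomes the claim that, for the actions actually under consideration, the far term dominates: $\mathbb{E}\left[|V_{\text{far}}(a)|\right] \gg \mathbb{E}\left[|V_{\text{near}}(a)|\right]$. How you estimate the far-future term is the empirical input; whether far-future welfare counts fully is the moral input. Both feed into whether the inequality holds.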
What I find slightly strange about this definition of longtermism in an EA context is that it presumes one does the careful analysis with “good epistemics” and then arrives at the VMDLT conclusion. But if VMDLT is simply the conclusion that careful analysis delivers, then how can we define “longtermist thinking” or “longtermist ideas”? A conclusion is not a distinctive way of reasoning.
By way of an off-the-cuff analogy, suppose we were all trying to evaluate the merits of boosting nuclear energy as a power source. We stated and defended our sets of overlapping core beliefs, consulted similar data and evidence, and came up with estimates and simulations. Our estimates of the net benefit of nuclear spread across a wide range: sometimes close to 0, sometimes negative, sometimes positive, sometimes very positive.
Would it then make sense to call the people who found it to be very positive “nuclear-ists”? What about those who found it to be just a bit better than 0 in expectation? Should all these people be thought of as a coherent movement and school of thought? Should they meet and coalesce around the fact that their results found that Nuclear > 0?
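To make the analogy concrete, here is a minimal, purely illustrative Monte Carlo sketch in Python (all distributions and parameter values are invented for the example) of how analysts starting from overlapping beliefs can end up with net-benefit estimates scattered around zero:

```python
import random

random.seed(0)

def analyst_estimate():
    """One analyst's net-benefit estimate for nuclear power (arbitrary units).

    Each analyst draws slightly different values for the same uncertain
    inputs -- overlapping beliefs, not identical ones.
    """
    benefit = random.gauss(10.0, 3.0)       # e.g., emissions avoided
    cost = random.gauss(9.0, 3.0)           # e.g., build-out and waste costs
    accident_harm = random.gauss(1.0, 1.0)  # e.g., expected accident harms
    return benefit - cost - max(accident_harm, 0.0)

estimates = [analyst_estimate() for _ in range(1000)]
print(f"range: {min(estimates):+.1f} to {max(estimates):+.1f}")
print(f"share concluding Nuclear > 0: {sum(e > 0 for e in estimates) / len(estimates):.0%}")
```

In this toy model the estimates differ because of where each analyst’s inputs happened to land, not because the “Nuclear > 0” group shares a distinctive worldview; that is the worry about defining longtermists by the VMDLT conclusion.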
Returning to VMDLT: I think there is no unique path to it. I suspect a range of combinations of empirical and moral beliefs could get you to VMDLT… or not.
Yes, I agree. I think longtermism is a step backwards from the original EA framework of importance/tractability/crowdedness, under which we allocate resources to the interventions with the highest expected value. If those happen to be aimed at future generations, great. But we’re going to hold a portfolio of interventions, and the ‘best’ intervention (the one that optimally receives the marginal funding dollar) will change as increased funding drives down marginal returns.
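A toy sketch of that last point (the cause names and returns curves below are made up for illustration, not anyone’s actual cost-effectiveness estimates): allocate a budget one dollar at a time to whichever intervention currently offers the highest marginal expected value, and the identity of the ‘best’ intervention changes as diminishing returns set in.

```python
import math

# Invented diminishing-returns curves: total value from f dollars of funding.
interventions = {
    "global_health": lambda f: 50 * math.log1p(f / 10),   # high initial returns, saturates fast
    "longtermist":   lambda f: 30 * math.log1p(f / 100),  # lower initial returns, saturates slowly
}

def marginal_value(curve, funded, step=1.0):
    """Approximate expected value of the next `step` dollars."""
    return curve(funded + step) - curve(funded)

funded = {name: 0.0 for name in interventions}
first_switch = None
for dollar in range(1, 301):
    # Greedy rule: the marginal dollar goes to the currently-best intervention.
    best = max(interventions, key=lambda n: marginal_value(interventions[n], funded[n]))
    if best != "global_health" and first_switch is None:
        first_switch = dollar
    funded[best] += 1.0

print(f"first dollar allocated to 'longtermist': #{first_switch}")
print("final allocation:", funded)
```

Under these made-up curves the early dollars all go to the cause with the steepest returns, and later dollars flow elsewhere once those returns flatten; no cause is ‘best’ independently of the funding level, which is the portfolio point above.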