Second, the argument overshoots. Given other plausible claims, building policy on this premise would not only lead governments to increase their efforts to prevent catastrophes. It would also lead them to impose extreme costs on the present generation for the sake of minuscule reductions in the risk of existential catastrophe.
I disagree with this.
First, I think that many moral views are compelled to regard the possibility that their generation permanently eradicates all humans from the world as especially bad, and as worthy of much extra effort to avoid. As I detailed in Chapter 2 of The Precipice, this can be based on considerations about the past or about the future. While longtermism is often associated with the future-directed reasons, I favour a broader definition. If someone is deeply moved by the Burkean partnership of the generations over an unbroken chain stretching back 10,000 generations, and thinks this gives additional reason not to be the generation who breaks it, then I’m inclined to say they are a longtermist too. But whether or not it counts, my arguments in Chapter 2 still imply that many moral views are already committed to a special badness of extinction (and often of other existential risks). This means there is a wide set of views that go beyond traditional cost-benefit analysis (CBA), and I can’t see a good argument why they should all overshoot.
And what about a longtermist view that is more typical of our community? Suppose we are committed to the idea that each person matters equally, no matter when they would live. It doesn’t follow from this that the best policy is one that demands vast sacrifices from the current generation, any more than this follows from the widely held view that all people matter equally regardless of race or place of birth. A theory of ethics or political philosophy could still place limits on the sacrifices that can be demanded of you, and especially on the sacrifices you can force others to endure in order to produce a greater benefit for others. Theories with such limits could still be impartial in time and could definitely qualify as longtermist.
Such demands could also be tempered by moral uncertainty, or by political commitments to pluralism or non-coercion.
And that is before we get to the fact that longtermist policy drafters don’t have to ignore the feasibility of their proposals — another clear way to stop before you overshoot.
I really don’t think it is clear that there are any serious policy suggestions from longtermists that do overshoot here. For example, in The Precipice (p. 186) my advice on budget is:
We currently spend less than a thousandth of a percent of gross world product on them. Earlier, I suggested bringing this up by at least a factor of 100, to reach a point where the world is spending more on securing its potential than on ice cream, and perhaps a good longer-term target may be a full 1 percent.
And this doesn’t seem too different from your own advice ($400B spending by the US is 2% of a year’s GDP).
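To make the scale of these targets concrete, here is a rough back-of-the-envelope calculation. The inputs are my own assumptions for illustration, not figures from the exchange: gross world product of about $85 trillion (roughly its value around the book’s publication) and global annual ice cream spending of about $60 billion.

```python
# Back-of-the-envelope on the budget targets from The Precipice (p. 186).
# GWP and ice cream figures below are assumptions for illustration only.

GWP = 85e12        # gross world product, USD (assumed)
ICE_CREAM = 60e9   # global annual ice cream spending, USD (assumed)

current_share = 0.001 / 100          # "a thousandth of a percent"
current_spend = GWP * current_share  # ~$850 million

scaled_up = current_spend * 100      # the "factor of 100" target: ~$85 billion
long_term = GWP * 0.01               # the "full 1 percent" target: ~$850 billion

print(f"current spend: ${current_spend / 1e6:.0f}M")
print(f"x100 target:   ${scaled_up / 1e9:.0f}B (vs ice cream: ${ICE_CREAM / 1e9:.0f}B)")
print(f"1% of GWP:     ${long_term / 1e9:.0f}B")
```

On these assumed inputs, the factor-of-100 target (~$85 billion per year) does indeed overtake ice cream spending, and the 1 percent target comes to roughly $850 billion per year.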
A different take might be that I and others could be commended for not going too far, but that in doing so we are being inconsistent with our stated principles. That is an interesting angle, and one raised by Jim Holt in his very good NYT book review. But I ultimately don’t think it works either: I can’t see any strong arguments that longtermism lacks the theoretical resources to consistently avoid overshooting.
The argument we are referring to here is the one we elsewhere call the ‘best-known argument’: the argument that the non-existence of future generations would be an overwhelming moral loss, because the expected future population is enormous, the lives of future people are good in expectation, and it is better if the future contains more good lives. We think that this argument is liable to overshoot.
I agree that there are other compelling longtermist arguments that don’t overshoot. But my concern is that governments can’t use these arguments to guide their catastrophe policy. That’s because these arguments don’t give governments much guidance in deciding where to set the bar for funding catastrophe-preventing interventions. They don’t answer the question, ‘By how much does an intervention need to reduce risks per $1 billion of cost in order to be worth funding?’
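To make the question concrete, here is a minimal sketch of how a standard cost-benefit test could set such a bar, valuing expected lives saved at a value of statistical life. All of the inputs (a VSL of $10 million, a protected population of 330 million, and the example risk reductions) are hypothetical assumptions of mine, not figures from the paper or the exchange:

```python
# Minimal sketch of a cost-benefit funding bar for a catastrophe-prevention
# intervention. All inputs are hypothetical assumptions for illustration.

VSL = 10e6          # assumed value of a statistical life, USD
POPULATION = 330e6  # assumed population protected (e.g. US-scale)
COST = 1e9          # intervention cost: $1 billion, as in the question above

def worth_funding(risk_reduction: float) -> bool:
    """Does the expected benefit of the risk reduction exceed the cost?

    risk_reduction: absolute reduction in the probability of a catastrophe
    that would kill everyone in the protected population.
    """
    expected_lives_saved = risk_reduction * POPULATION
    expected_benefit = expected_lives_saved * VSL
    return expected_benefit >= COST

# Break-even bar: the risk must fall by at least COST / (POPULATION * VSL).
bar = COST / (POPULATION * VSL)
print(f"break-even risk reduction per $1B: {bar:.2e}")  # ~3e-7

print(worth_funding(1e-6))  # True: a one-in-a-million reduction clears the bar
print(worth_funding(1e-8))  # False: too small to justify $1 billion
```

The point of the sketch is just that a bar of this kind falls out of inputs that ordinary cost-benefit analysis supplies; the question is whether the other longtermist arguments can supply anything analogous.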
We currently spend less than a thousandth of a percent of gross world product on them. Earlier, I suggested bringing this up by at least a factor of 100, to reach a point where the world is spending more on securing its potential than on ice cream, and perhaps a good longer-term target may be a full 1 percent.
And this doesn’t seem too different from your own advice ($400B spending by the US is 2% of a year’s GDP).
This seems like a good target to me, although note that $400B is our estimate of how much it would cost to fund our suite of interventions for a decade, rather than for a single year.
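For what it’s worth, the arithmetic behind the decade-versus-year distinction is simple. In this sketch the US GDP figure (~$23 trillion) is my assumption for illustration:

```python
# The decade-vs-year distinction in plain arithmetic.
# The US GDP figure (~$23 trillion) is an assumption for illustration.

US_GDP = 23e12
TOTAL = 400e9  # the $400B estimate

as_one_year = TOTAL / US_GDP          # read as one year's spending: ~1.7% of GDP
as_per_year = (TOTAL / 10) / US_GDP   # spread over a decade: ~0.17% of GDP per year

print(f"if spent in one year: {as_one_year:.1%} of a year's GDP")
print(f"if spread over ten:   {as_per_year:.2%} of a year's GDP per year")
```

So on the per-decade reading, the annualised figure is roughly a tenth of the share of GDP that the one-year reading suggests.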
Thanks for the clarifications!