Thanks for posting! I think you raise a number of interesting and valid points, and I’d love to see more people share their reasoning about cause prioritization in this way.
My own view is that your arguments are examples of why the case for longtermism is more complex than it might seem at first glance, but that ultimately they fall short of undermining longtermism.
A few quick points:
Re argument 1, I agree that infinities are a serious theoretical problem for consequentialist accounts of ethics. However, I don’t think that a constant discount rate for well-being is a plausible response.
Imagine telling someone who’ll live in 1,000 years: “I’m sorry that you have a bad life, but I discounted your well-being to near-zero because otherwise the sums in my calculations would not converge. And then the upshot was that greenhouse gas emissions were much less problematic than if I had used a zero discount rate, so I thought it was fine to emit more.”—Is that really what you’d want to say?
Conversely, would you have wanted, say, Confucius to teach his followers that they can basically ignore the effects of their actions on you just to avoid infinities? Wouldn’t you feel that other kinds of arguments should take priority, and that maybe he should have searched a little harder for alternatives other than a constant social discount rate?
You might be interested in On infinite ethics by Joe Carlsmith, a great blog post on this issue that also references some ways in which academic philosophers have tried to respond to the challenge posed by infinities.
(Also note that, empirically, my understanding is that our best cosmological guess is that the universe will eventually become too cold to sustain life or other ways of morally relevant information processing. Therefore, contrary to your discussion, I actually think that the more likely source of infinities is a spatially infinite universe or multiverse rather than a temporally unbounded future for morally relevant beings.)
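As an aside, the convergence worry that motivates a constant discount rate in the first place can be made explicit with a small numerical sketch; the per-period well-being value and the rate below are arbitrary choices for illustration, not anyone's actual parameters:

```python
# Illustration only: with a positive pure-time-preference rate rho, the discounted
# sum of a constant per-period well-being w converges to w * (1 + rho) / rho;
# with rho = 0 the partial sums grow without bound.

def discounted_total(w, rho, periods):
    """Partial sum of w / (1 + rho)**t for t = 0 .. periods - 1."""
    return sum(w / (1.0 + rho) ** t for t in range(periods))

w = 1.0
for periods in (100, 1_000, 10_000):
    print(periods,
          round(discounted_total(w, rho=0.03, periods=periods), 2),  # approaches ~34.33
          discounted_total(w, rho=0.0, periods=periods))             # 100, 1000, 10000, ...
```

With any strictly positive rate the total is finite; with a zero rate it diverges. That is the technical problem, but as argued above I don't think near-zero weights on future people are a plausible way out of it.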
Re argument 2, I actually think that conceptually most longtermists would agree with you. (But would disagree that this point undermines longtermism, and would disagree with some empirical claims in your discussion.)
When longtermists say that, as you put it, “human life has the same value temporally”, they mean something like this conceptual/ethical claim: “The value of a fixed amount of well-being – say, experiencing one hour of pleasure at a certain level – does not depend on when it is being experienced.” But they of course agree, empirically, that events happening at different times (such as saving a life sooner rather than later) can change how much total well-being is experienced.
So of course someone’s descendants count! Longtermists understand that the total number of someone’s descendants may depend on when they live. (Here’s a whole post on the issue!) In fact, empirically, both someone’s own well-being and the effect they will have on others’ total well-being (not just by having children) depend on when they live:
If you had lived in, say, the year 1000 CE, your life would probably have gone quite differently (and your lifetime well-being would probably have been lower).
If you had lived in 1950, you could have done a lot of good by contributing to smallpox eradication. Someone living today can no longer affect others’ well-being in that exact way. Similarly, someone living today may be able to have a large impact on total future well-being by reducing existential risk from emerging technologies, in a way not accessible to someone who lived before the relevant technologies had been invented. This list is, of course, endless.
In other words, the longtermist key tenet may be better summarized as “human life has the same value no matter when it happens, all else being equal”.
But doesn’t that mean that longtermists should adopt a nonzero social discount rate? Yes and no.
‘No’ because, as hinted at by the previous discussion, there are all kinds of effects that determine how the temporal location of someone’s life (or of some amount of resources, like a lump of steel or a $100 bill) makes a difference to how much good they will do. Only some of them – e.g., fertility and inflation over time scales on which they’re roughly constant – are well approximated by a constant discount rate. Others, such as the fact that someone living in 1900 couldn’t campaign for nuclear disarmament, aren’t. (And once we no longer arbitrarily single out specific effects such as fertility but try to assess the total indirect long-run effects of, e.g., saving a life, it becomes very hard to arrive at a bottom line.)
(There is a good discussion of this and other points in the Appendix of Parfit’s Reasons and Persons.)
‘Yes’ in the sense that a constant discount rate might still be a fine approximation in certain contexts – in particular, models that are only meant to be applied to the next couple of years and decades.
For example, it is entirely consistent to discount future resources because future people will be richer, and so derive less of an additional benefit from them.
In standard economic models, the discount rate is given by r = δ + ηg, with g denoting the consumption growth rate, η the elasticity of marginal utility of consumption (roughly, how quickly the returns to consumption diminish), and δ the rate at which we discount the intrinsic well-being of future people (known as the ‘rate of pure time preference’). Longtermists only argue that the rate of pure time preference should be zero, but are (provided the model is sensible at all) entirely content with ηg > 0. (A small worked example follows at the end of this comment.)
See this paper by GPI’s Hilary Greaves for an excellent discussion.
And ‘yes’ in the more general sense that we should of course pay attention to empirical differences in how much good we can do with a unit of resources at different times.
(Also, empirically I think you are wrong to say that fertility will stabilize at 2.4. Both the UN and, in my view, a better projection by a team at IHME predict that, absent intervention, the world population is likely to plateau and then decline. For UN projections in terms of fertility rather than population, see this graph, which projects the total fertility rate to fall below 2 before the end of the century with almost 90% confidence.)
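Here is the small worked example promised above; the parameter values are illustrative assumptions, not estimates from any particular model:

```python
# Illustration only: plugging assumed values into the Ramsey rule r = delta + eta * g.
delta = 0.0  # rate of pure time preference (the longtermist position: zero)
eta = 1.5    # elasticity of marginal utility of consumption (assumed)
g = 0.02     # annual consumption growth rate (assumed, 2% per year)

r = delta + eta * g
print(f"discount rate r = {r:.3f}")  # 0.030, i.e. 3% per year even with delta = 0
```

Even with zero pure time preference, future consumption gets discounted at about 3% per year in this toy setup, simply because future people are assumed to be richer and so to benefit less from an extra unit of consumption.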
Hi Max,
Thanks so much for responding to my post. I’m glad you were able to point me towards interesting further reading and provide valid critiques of my arguments. I’d like to offer a respectful rebuttal here, and I think you will find that we hold very similar perspectives and share a lot of middle ground.
You say that attempting to avoid infinities is fraught when explaining something to someone in the future. First, I think this misunderstands the proof by contradiction. The premises need not have any moral bearing on, or connection with, the concluding statement: so long as I finish with something absurd, there must have been a problem somewhere in the preceding argument.
But formal logic aside, my argument is not a moral one but a pragmatic and empirical one (which I think you agree with). Confucius ought to teach followers that people who are temporally close are more important, because helping them will improve their lives, and they will go on to improve the lives of others or give birth to children. This will then compound and affect me in 2022 in a much more profound way than if Confucius had taught people to try to find interventions that might save my life in 2022.
Your points about UN projections of population growth and the physics of the universe are well taken. Even so, while I do not completely understand the physics behind a multiverse or a spatially infinite universe, I do think that even if this is the case, it makes longtermism extremely fraught when it comes to small probabilities and big numbers. Can we really say that the probability that a random stranger will kill everyone is 1/(total finite population of all of future humanity)? Moreover, the first two contradictions within Argument 1 rely loosely on an infinite number of future people, but Contradiction 3 does not.
The Reflective Disequilibrium post is fascinating, because it implies that perhaps the more important factor that contributes to future well-being is not population growth, but rather accumulation of technology, which then enables health and population growth. But if anything, I think the key message of that article ought to be that saving a life in the past was extremely important, because that one person could then develop technologies that help a much larger fraction of the population. The blog does say this, but then does not extend that argument to discount rates. However, I really do think it should.
Of course, I do think one very valid question is whether technological growth will always be a positive force for human population growth. In the past, this seems to have been true, since the positive effects of technology appear to have vastly outweighed its negative effects, say, on the ability to wage war. The longtermist argument would then be that, in the future, the negative effect of technological growth on population will outpace the positive effect. If this indeed is the argument of longtermists, then adopting a near-zero discount rate may indeed be appropriate.
I do not want to advocate for a constant discount rate across all time for all projects, in the same way that we ought not to assign the same value of a statistical life across all times, countries, and actors. However, one could model a decreasing discount rate into the future (if one assumes that population growth will continue to decline past 2.4 and that technological progress’s effect on growth will also slow down) and then mathematically reduce that to an equivalent constant discount rate, as in the sketch below.
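Here is the sketch mentioned above. A declining rate schedule can be summarized, over a fixed horizon, by the constant rate that produces the same cumulative discount factor; the schedule below is made up purely for illustration:

```python
# Illustration only: reduce a declining discount-rate schedule to the constant
# annual rate that yields the same cumulative discount factor over the horizon.

def equivalent_constant_rate(rates):
    """Constant annual rate with the same cumulative discount factor as `rates`."""
    factor = 1.0
    for r in rates:
        factor *= 1.0 + r
    return factor ** (1.0 / len(rates)) - 1.0

# Hypothetical schedule: 3% for 50 years, then 2% for 50 years, then 1% for 100 years.
schedule = [0.03] * 50 + [0.02] * 50 + [0.01] * 100
print(f"equivalent constant rate over 200 years: {equivalent_constant_rate(schedule):.4f}")
```

The summary rate depends on the horizon you pick, which is part of why I would not defend any single constant rate for all projects and all times.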
I also agree with you that there are different interventions available to people at different periods of history.
I think, overall, my point is that helping someone today is going to have some sort of unknown compounding effect on people in the future. However, there are reasonable ways of understanding and mathematically bounding that compounding effect. So long as we ignore this, we will never be able to adequately weigh projects that we believe are extremely cost-effective in the short term against projects that we think are extremely uncertain and could affect the long term.
Given your discussion in the fourth bullet point from the end, it seems like we are broadly in agreement. Yes, I think one way to rephrase the push of my post is not so much that longtermism is wrong per se, but rather that we ought to find more effective ways of prioritizing any sort of project by assessing the empirical long-term effects of short-term interventions. So long as we ignore this, we will certainly see nearly all funding shift from global health and development to esoteric long-run safety projects.
As you correctly pointed out, there are many flaws in my naïve calculation. But the very fact that few have attempted to provide some way of comparing different funding opportunities across time seems to me a serious gap.
Thanks! I think you’re right that we may be broadly in agreement methodologically/conceptually. I think remaining disagreements are most likely empirical. In particular, I think that:
Exponential growth of welfare-relevant quantities (such as population size) must slow down qualitatively on time scales that are short compared to the plausible life span of Earth-originating civilization. This is because we’re going to hit physical limits, after which such growth will be bounded by the at most polynomially growing amount of usable energy (because we can’t travel in any direction faster than the speed of light).
Therefore, setting in motion processes of compounding growth earlier or with larger initial stocks “only” has the effect of us reaching the polynomial-growth plateau sooner. Compared to that, it tends to be more valuable to increase the probability that we reach the polynomial-growth plateau at all, or that once we reach it we use the available resources well by impartially altruistic standards. (Unless the effects of the latter type that we can feasibly achieve are tiny, which I don’t think they are empirically – it does seem that existential risk this century is on the order of at least 1%, and that we can reduce it by nontrivial quantities.) A toy sketch of this comparison follows at the end of this comment.
(I think this is the most robust argument, so I’m omitting several others. – E.g., I’m skeptical that we can ever come to a stable assessment of the net indirect long-term effects of, e.g., saving a life by donating to AMF.)
This argument is explained better in several other places, such as Nick Bostrom’s Astronomical Waste, Paul Christiano’s On Progress and Prosperity, and other comments here and here.
The general topic does come up from time to time, e.g. here.
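Here is the toy sketch mentioned above. The growth rate, the cubic bound, and the horizon are arbitrary assumptions chosen only to display the qualitative behaviour, not estimates of anything:

```python
import math

# Toy model: welfare-relevant capacity grows exponentially from an initial stock
# until it hits a bound that grows with the cube of elapsed time (a stand-in for
# light-speed-limited resource acquisition), after which it tracks that bound.

def capacity(t, head_start=0.0, growth=0.05, v0=1.0, cube_scale=1e-6):
    """Capacity at year t if compounding growth began head_start years before t = 0."""
    elapsed = t + head_start
    physical_bound = cube_scale * elapsed ** 3
    log_exponential = math.log(v0) + elapsed * math.log1p(growth)
    if log_exponential >= math.log(physical_bound):
        return physical_bound  # the exponential phase has already hit the bound
    return math.exp(log_exponential)

horizon = 1_000_000                              # years; capacity at the horizon proxies total value
baseline = capacity(horizon)
earlier_start = capacity(horizon, head_start=100)  # compounding growth starts 100 years earlier
survival_bump = 1.01 * baseline                    # +1 percentage point chance of getting there at all

print(f"gain from a 100-year head start:     {earlier_start / baseline - 1:.4%}")
print(f"gain from +1pp survival probability: {survival_bump / baseline - 1:.4%}")
```

On this toy picture, a century’s head start changes the long-run outcome by a small fraction of a percent, whereas a one-percentage-point change in the probability of reaching the plateau at all changes it by a full percent – which is the comparison the argument above turns on.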
In a sense, I agree with many of Greaves’ premises but none of her conclusions in the post you mentioned. I do think we ought to be doing more modeling, because there are some things that are actually possible to model reasonably accurately (and others that are not).
Greaves says an argument for longtermism is, “I don’t know what the effect is of increasing population size on economic growth.” But we do! There are times when it increases economic growth, and there are times when it decreases it. There are very well-thought-out macro models of this, but in general, I think we ought to be in favor of increasing population growth.
She also says, “I don’t know what the effect [of population growth] is on tendencies towards peace and cooperation versus conflict.” But that’s like saying, “Don’t invent the plow or modern agriculture, because we don’t know whether they’ll get into a fight once villages have grown big enough.” This distresses me so much, because it seems that the pivotal point in her argument is that we can no longer agree that saving lives is good, but rather only that extinction is bad. If we can no longer agree that saving lives is good, I really don’t know what we can agree upon…