This report seems to assume exponential discount rates for the future when modeling extinction risk. This assumption leads to extreme and seemingly immoral conclusions when applied to decisions that previous generations of humans faced.
I think exponential discount rates can make sense in short-term economic modeling, where they can serve as a proxy for various forms of hard-to-model uncertainty and for the death of individual participants in an economic system. But applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermines the results of any analysis about the long-run future).
The report says:
However, for this equation to equal 432W, we would require merely that ρ = 0.99526. In other words, we would need to discount utility flows like our own at 0.47% per year, to value such a future at 432 population years. This is higher than Davidson (2022), though still lower than the lowest rate recommended in Circular A-4. It suggests conservative, but not unheard of, valuations of the distant future would be necessary to prefer pausing science, if extinction imperiled our existence at rates implied by domain expert estimates.
At this discount rate, you would value a civilization living 10,000 years in the future, which is something that past humans’ decisions did influence, at less than one billion-billionth of the value of their own civilization at the time. By this logic, ancestral humans should have taken a trade where they got a slightly better meal, or a single person lived a single additional second (or anything else that improved the life of a single person by more than a billionth of a percent), in exchange for present civilization completely failing to come into existence.
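As a quick sanity check of that number, here is the arithmetic in a few lines of Python (the per-year factor ρ = 0.99526 is taken from the report’s quote above):

```python
# Weight placed on a civilization 10,000 years out, per unit of present value,
# using the report's quoted per-year discount factor.
rho = 0.99526           # per-year factor, i.e. a 0.47% annual discount rate
years = 10_000

weight = rho ** years
print(f"{weight:.2e}")  # ~2.3e-21, i.e. less than one billion-billionth (1e-18)
```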
This seems like a pretty strong reductio ad absurdum, so I have trouble taking the recommendations of the report seriously. From an extinction risk perspective it seems that if you buy exponential discount rates as aggressive as 1%, you have basically committed to not caring about future humans in any substantial way. It also seems to me that various thought experiments (like the above ancestral human deciding between dealing with the annoyance of stepping over a stone, or causing the destruction of our complete future civilization) demonstrate that such discount rates almost inevitably recommend actions that seem strongly in conflict with common sense notions of treating future generations with respect.
I think many economists justify discount rates for more pragmatic reasons, including uncertainty over the future. Your hypothetical in which a civilization 10,000 years from now is given extremely little weight isn’t necessarily a reductio in my opinion, since we know very little about what the world will be like in 10,000 years, or how our actions now could predictably change anything about the world 10,000 years from now. It is difficult to forecast even 10 years into the future. Forecasting 10,000 years into the future is in some sense “1000 times harder” than the 10-year forecast.
An exponential discount rate is simply one way of modeling “epistemic fog”, such that things further from us in time are continuously more opaque and harder to see from our perspective.
Do economists actually use discount rates to account for uncertainty? My understanding was that we are discounting expected utilities, so uncertainty should be accounted for in those expected utilities themselves.
Maybe it’s easier to account for uncertainty via an increasing discount rate, but an exponential discount rate seems inappropriate. For starters I would think our degree of uncertainty would moderate over time (e.g. we may be a lot more uncertain about effects ten years from now than today, but I doubt we are much more uncertain about effects 1,000,010 years from now compared to 1,000,000 or even 500,000 years from now).
If you think that the risk of extinction in any year is a constant γ, then the probability of surviving to year t is (1 − γ)^t, so that makes exponential discounting the only principled choice. If you think the risk of extinction is time-varying, then you should do something else. I imagine that a hyperbolic discount rate or something else would be fine, but I don’t think it would change the results very much (you would just have another small number as the break-even discount rate).
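Spelling that step out (a sketch, reading γ as a constant per-year extinction probability):

```latex
% Probability of surviving to year t under a constant annual extinction risk \gamma:
\Pr[\text{survival to } t] = (1 - \gamma)^t
% Expected utility at year t is therefore exponentially discounted:
\mathbb{E}[u_t] = (1 - \gamma)^t \, u_t \approx e^{-\gamma t} \, u_t \qquad (\gamma \ll 1)
```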
I think there’s a non-negligible chance we survive until the death of the sun or whatever, maybe even after, which is not well-modelled by any of this.
The reason it seems reasonable to view the future 1,000,010 years out as almost exactly as uncertain as 1,000,000 years out is mostly myopia. To analogize: is the ground 1,000 miles west of me more or less uneven than the ground 10 miles west of me? Maybe, maybe not—but I have a better idea of what the near surroundings are like, so they seem more known. For the long-term future, we don’t have much confidence in our projections of either a million or a million and ten years, but it seems hard to understand why all the relevant uncertainties would simply go away, other than us simply not being able to have any degree of resolution at that distance. (Unless we’re extinct, in which case, yeah.)
I agree that in short-term contexts a discount rate can be a reasonable pragmatic choice to model things like epistemic uncertainty, but this seems to somewhat obviously fall apart on the scale of tens of thousands of years. If you introduce space travel and uploaded minds and a world where even traveling between different parts of your civilization might take hundreds of years, you of course have much better bounds on how your actions might influence the future.
I think something like a decaying exponential wouldn’t seem crazy to me, where you do something like 1% for the next few years, then 0.1% for the next few hundred years, then 0.01% for the next few thousand years, etc. (see the sketch below). But anything that is assumed to stay exponential at a fixed rate when modeling the distant future seems like it doesn’t survive sanity checks.
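A minimal sketch of what such a schedule could look like, using the rates from the paragraph above (the breakpoints are my own illustrative assumptions):

```python
def annual_rate(year: int) -> float:
    """Illustrative decaying schedule: the discount rate itself shrinks with horizon."""
    if year < 100:
        return 0.01      # ~1% in the near term
    if year < 1_000:
        return 0.001     # ~0.1% over the next few hundred years
    return 0.0001        # ~0.01% over the next few thousand years

def cumulative_factor(horizon: int) -> float:
    """Multiply out the per-year factors up to the given horizon."""
    factor = 1.0
    for year in range(horizon):
        factor *= 1.0 - annual_rate(year)
    return factor

# A 10,000-year future keeps non-trivial weight (~0.06) under this schedule,
# versus ~2e-21 under a flat 0.47% exponential rate.
print(cumulative_factor(10_000))
```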
Edit: To clarify more: This bites particularly hard when dealing with extinction risks. The whole point of talking about extinction is that we have an event which we are very confident will have very long-lasting effects on the degree to which our values are fulfilled. If humanity goes extinct, it seems like we can be reasonably confident (though not totally confident) that this implies a large reduction in human welfare billions of years into the future (since there would be no humans around anymore). So especially in the context of extinction risk, an exponential discount rate seems inappropriate for modeling the relevant epistemic uncertainty.
Perhaps worth noting that very long-term discounting is even more obviously wrong because of light-speed limits and the finite mass available to us, which bound long-term achievable wealth—at which point discounting should be based on polynomial (cubic) growth rather than exponential growth. And around 100,000-200,000 years out it gets far worse, once we’ve saturated the Milky Way.
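To make the light-cone geometry explicit (a sketch; c is the expansion speed and σ a fixed resource density):

```latex
% Resources reachable by time t are bounded by the volume of an expanding sphere:
R(t) \le \sigma \cdot \tfrac{4}{3} \pi (c t)^3 = O(t^3)
% so growth in accessible wealth is eventually polynomial (cubic), not exponential.
```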
Hyperbolic discounting, despite its reputation for being super-short-term and irrational, is actually better in this context, and doesn’t run into the same absurd “value an extra meal in 10,000 years more than a thriving civilization in 20,000 years” problems of exponential discounting.
Here is a nice blog post arguing that hyperbolic discounting is actually more rational than exponential: hyperbolic discounting is what you get when you have uncertainty over what the correct discount rate should be.
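That claim is easy to check numerically: averaging exponential discount factors over an uncertain rate produces a hyperbolic curve. A minimal sketch, assuming (purely for illustration) an exponential prior over the rate with mean 1%:

```python
import math
import random

mean_rate = 0.01  # assumed mean of our uncertain discount rate (illustrative)
rates = [random.expovariate(1 / mean_rate) for _ in range(100_000)]

for t in (10, 100, 1_000, 10_000):
    # Expected discount factor, averaging over our uncertainty about the rate:
    averaged = sum(math.exp(-r * t) for r in rates) / len(rates)
    hyperbolic = 1 / (1 + mean_rate * t)  # closed form for this particular prior
    print(t, f"{averaged:.4f}", f"{hyperbolic:.4f}")
```

At t = 10,000 the averaged factor is still roughly 0.01, versus e^(−100) ≈ 4×10^(−44) for a fixed 1% exponential rate, which is exactly the gap driving the disagreement above.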
Commenting more, this report also says:
Using the Superforecaster estimates, we need the value of all future utility outside the current epistemic regime to be equivalent to many tens of thousands of years at current consumption and population levels, specifically 66,500-178,000 population years. With domain experts we obtain much lower estimates. Given implied extinction risks, we would prefer to pause science if future utility is roughly equivalent to 400-1000 years of current population-years utility.
I don’t really know why the author thinks that 100,000x is a difficult threshold to hit for the value of future civilization. My guess is this must be a result of the exponential discount rate, but assuming any kind of space colonization (which, my guess is, expert estimates of the kind the author puts a lot of weight on would rate at least tens-of-percent likely within the next few thousand years), it seems almost inevitable that the human population will grow to at least 100x-10,000x its present size. You only need to believe in 10-100 years of that kind of future to reach the higher thresholds of valuing the future at ~100,000x current population levels (see the arithmetic below).
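The arithmetic here is simple enough to spell out (the multipliers and durations are the illustrative numbers from the paragraph above):

```python
# Future value in "population-years", in units of today's population:
# (population multiplier) x (years sustained), vs. the ~100,000 threshold.
for multiplier, years in [(10_000, 10), (1_000, 100), (100, 1_000)]:
    print(f"{multiplier:>6}x for {years:>5} years -> {multiplier * years:,} population-years")
```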
And of course, in expectation, averaging across many futures and taking into account the heavy right tail, as many thinkers have written about, there might very well be more than 10^30 humans alive, dominating many expected-value estimates here and easily crushing the threshold of 100,000x present population value.
To be clear, I am not particularly in favor of halting science, but I find the reasoning in this report not very compelling for that conclusion.
To embrace this as a conclusion, you also need to fairly strongly buy total utilitarianism across the future light cone, as opposed to any understanding of the future, and the present, on which humanity as a species doesn’t change much in value just because there are more people. (Not that I think either view is obviously wrong, but total utilitarianism is so generally assumed in EA that it often goes unnoticed, and it’s very much not a widely shared view among philosophers or the public.)
Matthew is right that uncertainty over the future is the main justification for discount rates, but another principled reason to discount the future is that future humans will be significantly richer and better off than we are, so if marginal utility is diminishing, then resources are better allocated to us than to them. This classically gives you a discount rate of ρ = δ + ηg, where ρ is the applied discount rate, δ is a rate of pure time preference that you argue should be zero, g is the growth rate of income, and η determines how steeply marginal utility declines with income. So even if you have no ethical discount rate (δ = 0), you would still end up with ρ > 0. Most discount rates are loaded on the growth adjustment (ηg) and not the ethical discount rate (δ), so I don’t think longtermism really bites against having a discount rate. [EDIT: this is wrong, see Jack’s comment]
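As a worked instance of the Ramsey rule (the η and g values here are common illustrative choices, not from the report):

```latex
% Ramsey rule: \rho = \delta + \eta g
% With zero pure time preference (\delta = 0), \eta = 1.5, and growth g = 2\%/yr:
\rho = 0 + 1.5 \times 0.02 = 0.03 = 3\% \text{ per year}
```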
Also, am I missing something, or would a zero discount rate make this analysis impossible? The future utility with and without science is “infinite” (the sum of utilities diverges unless you have a discount rate) so how can you work without a discount rate?
Matthew is right that uncertainty over the future is the main justification for discount rates
I don’t think this is true if we’re talking about Ramsey discounting. Discounting for public policy: A survey and Ramsey and Intergenerational Welfare Economics don’t seem to indicate this.
Also, am I missing something, or would a zero discount rate make this analysis impossible?
I don’t think anyone is suggesting a zero discount rate? Worth noting though that the first paper I linked to discusses a generally accepted argument that the discount rate should fall over time to its lowest possible value (Weitzman’s argument).
Most discount rates are loaded on the growth adjustment (ηg) and not the ethical discount rate (δ) so I don’t think longtermism really bites against having a discount rate.
The growth adjustment term is only relevant if we’re talking about increasing the wealth of future people, not when we’re talking about saving them from extinction. To quote Toby Ord in The Precipice:
“The entire justification of the growth adjustment term is to adjust for marginal benefits that are worth less to you when you are richer (such as money or things money can easily buy), but that is inapplicable here—if anything, the richer people might be, the more they would benefit from avoiding ruin or oblivion. Put another way, the ηg term is applicable only when discounting monetary benefits, but here we are considering discounting wellbeing (or utility) itself. So the ηg term should be treated as zero, leaving us with a social discount rate equal to δ.”
Yes, Ramsey discounting focuses on the higher incomes of people in the future, which is the part I focused on. I probably shouldn’t have said “main”, but I meant that uncertainty over the future seems like the first-order concern to me (and Ramsey ignores it).
Habryka’s comment:
applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermines the results of any analysis about the long-run future).
seems to be arguing for a zero discount rate.
Good point that growth-adjusted discounting doesn’t apply here, my main claim was incorrect.
Long-run growth rates cannot be exponential. This is easy to prove: even mild steady exponential growth rates would exhaust all available matter and energy in the universe within a few million years (see Holden Karnofsky’s post “This Can’t Go On” for more details).
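A quick sanity check of that claim (the ~10^80 atom count is the standard order-of-magnitude estimate for the observable universe):

```python
import math

growth = 0.02    # a "mild" 2% annual growth rate
atoms = 1e80     # rough atom count of the observable universe

# Years of compounding before an economy, starting from one unit of resources,
# has more units than there are atoms in the observable universe:
years = math.log(atoms) / math.log(1 + growth)
print(round(years))  # ~9,300 years, comfortably within "a few million"
```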
So a model that tries to adjust for the marginal utility of resources should also quickly switch towards something other than assumed exponential growth within a few thousand years.
Separately, the expected lifetime of the universe is finite, as is the space we can affect, so I don’t see why you need discount rates (see a bunch of Bostrom’s work for how much life the energy in the reachable universe can support).
But even if things were infinite, then the right response isn’t to discount the future completely within a few thousand years just because we don’t know how to deal with infinite ethics. The choice of exponential discount rates in time does not strike me as very principled in the face of the ethical problems we would be facing in that case.
At this discount rate, you would value a civilization that lives 10,000 years in the future, which is a real choice that past humans faced, at less than one billion-billionth of the value of their own civilization at the time.
What choice are you thinking of?
I meant in the sense that humans were alive 10,000 years ago, and could have caused the extinction of humanity then (and in that decision, by the logic of the OP, they would have assigned approximately zero weight to us existing).
I’m not sure that choice is a real one humanity actually faced though. It seems unlikely that humans alive 10,000 years ago actually had the capability to commit omnicide, still less the ability to avert future omnicide for the cost of lunch. It’s not a strong reductio ad absurdum because it implies a level of epistemic certainty that didn’t and doesn’t exist.
The closest ancient-world analogue is humans presented with entirely false choices to sacrifice their lunch to long-forgotten deities to preserve the future of humanity. Factoring in the possible existence of billions of humans 10,000 years into the future wouldn’t have allowed them to make decisions that better ensured our survival, so I have absolutely no qualms with those who discounted the value of our survival low enough to decline to proffer their lunch.
Even if humanity 10,000 years ago had been acting on good information (perhaps a time traveller from this century warned them that cultivating grasses would set them on a path towards a civilization capable of omnicide) rather than avoiding a Pascal’s mugging, it’s far from clear that humanity deciding to go hungry to prevent the evils of civilization from harming billions of future humans would (i) not have ended up discovering the scientific method and founding civilizations capable of splitting atoms and engineering pathogens a bit later on anyway, or (ii) have ended up with as many happy humans if their cultural taboos against civilization had somehow persisted. So I’m unconvinced of a moral imperative to change course even with that foreknowledge. We don’t have comparable foreknowledge of any course the next 10,000 years could take, and our knowledge of actual and potential existential threats gives us more reason to discount the potential big expansive future even if we act now, especially if the proposed risk mitigation is as untenable and unsustainable as “end science”.
If humanity ever reached the stage where we could meaningfully trade inconsequential things for cataclysms that only affect people in the far future [with high certainty], that might be time to revisit the discount rate, but it’s supposed to reflect our current epistemic uncertainty.