I think fixed discount rates (i.e. a discount rate where every year, no matter how far away, reduces the weighting by the same fraction) of any amount seem pretty obviously crazy to me as a model of the future. We use discount rates as a proxy for things like “predictability of the future” and “constraining our plans towards worlds we can influence”, which often makes sense, but I think even very simple thought experiments produce obviously insane conclusions if you use practically any non-zero fixed discount rate in situations where it comes apart from those proxies (as is virtually guaranteed to happen in the long-run future).
This report seems to assume exponential discount rates for the future when modeling extinction risk, which leads to extreme and seemingly immoral conclusions when applied to decisions that previous generations of humans faced.
I think exponential discount rates can make sense in short-term economic modeling, where they act as a proxy for various forms of hard-to-model uncertainty and for the death of individual participants in an economic system, but applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for future civilizations and future humans (and as such overdetermines the results of any analysis of the long-run future).
The report says:
However, for this equation to equal 432W, we would require merely that ρ = 0.99526. In other words, we would need to discount utility flows like our own at 0.47% per year, to value such a future at 432 population years. This is higher than Davidson (2022), though still lower than the lowest rate recommended in Circular A-4. It suggests conservative, but not unheard of, valuations of the distant future would be necessary to prefer pausing science, if extinction imperiled our existence at rates implied by domain expert estimates.
At this discount rate, you would value a civilization living 10,000 years in the future, which is something that past humans’ decisions did influence, at less than a billionth of a billionth of the value of their own civilization at the time. By this logic, ancestral humans should have taken a trade where they got a slightly better meal, or where a single person lived a single additional second (or anything else that improved the life of a single person by more than a billionth of a percent), at the cost of present civilization completely failing to come into existence.
This seems like a pretty strong reductio ad absurdum, so I have trouble taking the recommendations of the report seriously. From an extinction-risk perspective, it seems that if you buy exponential discount rates as aggressive as 1%, you are basically committed to not caring about future humans in any substantial way. It also seems to me that various thought experiments (like the ancestral human above choosing between the minor annoyance of stepping over a stone and the destruction of our entire civilization) demonstrate that such discount rates almost inevitably recommend actions that are strongly in conflict with common-sense notions of treating future generations with respect.
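To make the arithmetic concrete, here is a minimal sketch in Python of how quickly these fixed rates zero out the far future. The 10,000-year horizon and the 1% comparison rate are just the numbers from the paragraphs above; nothing here is taken from the report itself.

```python
# Weight an exponential discounter places on a year t years from now: w(t) = rho**t,
# where rho is the per-year discount factor (rho = 0.99526 is a ~0.47%/year rate).

def weight(rho: float, years: float) -> float:
    return rho ** years

# The report's rho = 0.99526, over the 10,000-year horizon in the thought experiment:
print(weight(0.99526, 10_000))  # ~2e-21, i.e. below one part in a billion billion (1e-18)

# A 1%/year rate, for comparison:
print(weight(0.99, 2_000))      # ~2e-9: even 2,000 years out gets only a few parts per billion
print(weight(0.99, 10_000))     # ~2e-44: effectively zero weight on anything 10,000 years out
```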
I agree there’s a decent case to be made for abandoning fixed exponential discount rates in favor of a more nuanced model. However, it’s often unclear what model is best suited to handle scenarios involving a sequence of future events — T_1, T_2, T_3, …, T_N — where our knowledge about T_i is always significantly greater than our knowledge about T_{i+1}.
From what I understand, many EAs seem to reject time discounting partly because they accept an empirical premise that goes something like this: “The future becomes increasingly difficult to predict as we look further ahead, but at some point, there will be a ‘value lock-in’ — a moment when key values or structures become fixed — and after this lock-in, the long-term future could become highly predictable, even over time horizons spanning billions of years.” If this premise is correct, it might justify using something like a fixed discount rate for time periods leading up to the value lock-in, but then something like a zero rate of time discounting after the anticipated lock-in.
Personally, I find the concept of a value lock-in to be highly uncertain and speculative. Because of this, I’m skeptical of the conclusion that we should treat the level of epistemic uncertainty about the world, say, 1,000 years from now as being essentially the same as the uncertainty about the world 1 billion years from now. While both timeframes might feel similarly distant from our perspective — both being “a long time from now” — I ultimately think there’s still a meaningful difference: predicting the state of the world 1 billion years from now is likely much harder than predicting the state of the world 1,000 years from now.
One reasonable compromise model between these two perspectives is to tie the discount rate to the predicted amount of change that will happen at a given point in time. This could lead to a continuously increasing discount rate for years leading up to and including AGI, but then eventually a falling discount rate for later years as technological progress becomes relatively saturated.
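As a toy illustration of that compromise (a sketch only; the change profile, the peak year, and the rates below are numbers I have made up for illustration, not anything proposed in this thread), one could index the per-year discount to an expected-change curve that rises into a transition period and falls afterwards:

```python
import math

# Toy model: discount each year in proportion to how much "change" (and hence
# unpredictability) is expected in that year, instead of at a fixed rate.
# change_rate is a made-up profile: low now, peaking around a hypothetical
# transition (e.g. the years around AGI), then falling back as progress saturates.

def change_rate(year: float, peak_year: float = 50.0, width: float = 25.0,
                base: float = 0.0001, peak: float = 0.05) -> float:
    bump = math.exp(-((year - peak_year) / width) ** 2)
    return base + (peak - base) * bump

def weight(horizon_years: int) -> float:
    # Weight on a year `horizon_years` out: compound the per-year rates up to that point.
    return math.exp(-sum(change_rate(t) for t in range(horizon_years)))

for h in (10, 100, 1_000, 10_000):
    print(h, weight(h))

# Unlike a fixed exponential rate, the marginal discount applied to year 10,000
# is back down near `base`, so the far future is not mechanically driven to zero
# by the unpredictability of the transition period.
```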
I’ve referred to this latter point as candy bar extinction: using fixed discount rates, a candy bar now is better than preventing extinction with certainty some number of years in the future. (And with moderately high discount rates, the number of years isn’t even absurdly high!)
See also my comment here: https://forum.effectivealtruism.org/posts/PArvxhBaZJrGAuhZp/report-on-the-desirability-of-science-given-new-biotech?commentId=rsqwSR6h5XPY8EPiT
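For concreteness, here is a quick back-of-the-envelope version of the candy-bar comparison; the dollar figures are placeholders of my own choosing, not anything from the comment above.

```python
import math

# How many years out does a fixed discount rate have to push extinction before
# its discounted disvalue drops below the value of a candy bar today?

candy_bar_value = 2.0         # dollars -- placeholder
civilization_value = 1e15     # dollars -- placeholder for "everything at stake"

for annual_rate in (0.005, 0.02, 0.05):
    rho = 1.0 - annual_rate
    # Solve rho**t * civilization_value < candy_bar_value for t:
    t = math.log(candy_bar_value / civilization_value) / math.log(rho)
    print(f"{annual_rate:.1%}: candy bar wins after ~{t:,.0f} years")

# At 0.5%/yr the crossover is ~6,800 years; at 2%/yr ~1,700 years; at 5%/yr
# ~660 years -- "not even absurdly high", as the comment puts it.
```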