I think fixed discount rates (i.e. a discount rate where every year, no matter how far away, reduces the weighting by the same fraction) of any size seem pretty obviously crazy to me as a model of the future. We use discount rates as a proxy for things like “predictability of the future” and “constraining our plans towards worlds we can influence”, which often makes sense, but I think even very simple thought-experiments produce obviously insane conclusions if you use practically any non-zero fixed discount rate in situations where it comes apart from those proxies (as is virtually guaranteed to happen in the long-run future).
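To make that concrete, here's a toy calculation (the 1% annual rate is an arbitrary assumption for illustration; any fixed non-zero rate gives the same qualitative picture):

```python
# Toy illustration: weights a fixed (exponential) discount rate assigns
# to future years. The 1% annual rate is an arbitrary assumption.
rate = 0.01

for years in [100, 1_000, 10_000, 1_000_000]:
    weight = (1 - rate) ** years
    print(f"{years:>9,} years out: weight = {weight:.3e}")

# Output:
#       100 years out: weight = 3.660e-01
#     1,000 years out: weight = 4.317e-05
#    10,000 years out: weight = 2.249e-44
# 1,000,000 years out: weight = 0.000e+00  (underflows float64)
```

On these numbers, preventing a guaranteed catastrophe 10,000 years from now carries a weight around 10^-44, so it loses to virtually any trivial benefit today; that's the kind of conclusion I have in mind.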
I agree there’s a decent case to be made for abandoning fixed exponential discount rates in favor of a more nuanced model. However, it’s often unclear what model is best suited to handle scenarios involving a sequence of future events — T_1, T_2, T_3, …, T_N — where our knowledge about T_i is always significantly greater than our knowledge about T_{i+1}.
From what I understand, many EAs seem to reject time discounting partly because they accept an empirical premise that goes something like this: “The future becomes increasingly difficult to predict as we look further ahead, but at some point there will be a ‘value lock-in’ — a moment when key values or structures become fixed — and after this lock-in, the long-term future could become highly predictable, even over time horizons spanning billions of years.” If this premise is correct, it might justify a fixed discount rate for the period leading up to the value lock-in, but a zero rate of time discounting after the anticipated lock-in.
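If I'm reading that premise right, the implied discount schedule is piecewise: a positive rate up to the lock-in, then a zero rate afterwards. A minimal sketch (the 1% rate and the 500-year lock-in date are made-up placeholders, not anyone's actual estimates):

```python
def lockin_weight(t, r=0.01, t_lockin=500):
    """Weight on year t under a 'value lock-in' schedule: exponential
    discounting at rate r until the assumed lock-in year, then a flat
    (zero-rate) weight for all later years. Both r and t_lockin are
    illustrative placeholders."""
    return (1 - r) ** min(t, t_lockin)

print(lockin_weight(100))            # ~0.366  (still pre-lock-in)
print(lockin_weight(10_000))         # ~0.0066 (frozen at the t=500 weight)
print(lockin_weight(1_000_000_000))  # ~0.0066 (a billion years counts the same)
```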
Personally, I find the concept of a value lock-in to be highly uncertain and speculative. Because of this, I’m skeptical of the conclusion that we should treat the level of epistemic uncertainty about the world, say, 1,000 years from now as being essentially the same as the uncertainty about the world 1 billion years from now. While both timeframes might feel similarly distant from our perspective — both being “a long time from now” — I ultimately think there’s still a meaningful difference: predicting the state of the world 1 billion years from now is likely much harder than predicting the state of the world 1,000 years from now.
One reasonable compromise model between these two perspectives is to tie the discount rate to the predicted amount of change at a given point in time. This could lead to a continuously increasing discount rate for the years leading up to and including AGI, but then eventually a falling discount rate for later years as technological progress becomes relatively saturated.
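Here's a minimal sketch of that shape, with the per-year discount rate tied to an assumed rate-of-change curve that peaks around an AGI transition (the Gaussian bump, the 50-year peak, and every other number here are placeholders, not forecasts):

```python
import math

def change_rate(t, t_agi=50, width=20, peak=0.08, floor=0.005):
    """Assumed rate-of-change curve: modest today, peaking around an
    AGI transition at t_agi, then decaying back toward a small floor
    as progress saturates. The Gaussian bump is an arbitrary choice."""
    return floor + peak * math.exp(-((t - t_agi) / width) ** 2)

def weight(t):
    """Discount weight on year t: the product of per-year factors
    (1 - r_s), where r_s is the discount rate assigned to year s."""
    w = 1.0
    for s in range(int(t)):
        w *= 1 - change_rate(s)
    return w

for t in [25, 50, 100, 1_000]:
    print(f"year {t:>5,}: weight = {weight(t):.3e}")
```

The qualitative feature that matters is that the per-year rate falls after the transition, so weights on the far future flatten out instead of decaying geometrically forever, unlike under a fixed rate.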
Yeah, this is roughly the kind of thing I would suggest if one wants to stay within the discount rate framework.