For example, if you estimate a background, essentially unavoidable existential risk rate of 0.1% per year (as the discount rate used in the UK government's Stern Review suggests), then nearly all of the value of your actions (roughly 99.3%) is eroded after 5,000 years, and arguably it is not worth thinking beyond that timeframe.
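As a rough sanity check on that figure (a quick sketch; the 0.1% annual rate is just the number quoted above):

```python
# Fraction of long-run value surviving a constant 0.1% annual
# existential risk rate after 5000 years.
rate = 0.001
years = 5000
surviving = (1 - rate) ** years
print(f"surviving: {surviving:.4f}")      # ~0.0067
print(f"eroded:    {1 - surviving:.1%}")  # ~99.3%
```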

Worth bearing in mind that uncertainty over the discount rate leads you to apply a discount rate that declines over time (Weitzman 1998). This avoids conclusions that say we can completely ignore the far future.

Yes, that is true and a good point. I would expect a very small non-zero discount rate to be reasonable, although I'm still not sure what relevance this has to arguments for longtermism.

Appendix A of The Precipice by Toby Ord has a good discussion of discounting and its implications for reducing existential risk. Firstly, he says that discounting on the basis of humanity being richer in the future is irrelevant, because what is at stake is whether we have any sort of future at all, which is not a monetary benefit subject to diminishing marginal utility. Note that this argument may apply to a wide range of longtermist interventions (including non-x-risk ones).

Ord also sets the rate of pure time preference equal to zero for the regular reasons.

That leaves him with just discounting for the catastrophe rate, which he says is reasonable. However, he also says it is quite possible that we will be able to reduce catastrophic risk over time. This makes us uncertain about the future catastrophe rate, meaning, as mentioned, that we should apply a discount rate that declines over time, and that we should actually discount the long-term future as if we were in the safest world among those we find plausible. Therefore longtermism goes through, provided the other requirements hold (the future is vast in expectation, there are tractable ways to influence the long-run future, etc.).
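A toy illustration of the "safest world" point (the numbers are made up): suppose we think the true annual catastrophe rate is either 0.1% or 1%, with equal credence. The expected discount factor is increasingly dominated by the safer world, so the implied annual rate falls toward 0.1%.

```python
# Expected discount factor across two equally likely catastrophe rates.
# In the long run the 0.1% world dominates the expectation, so the far
# future ends up discounted roughly as if the rate were 0.1%.
for t in [10, 100, 1000, 5000]:
    factor = 0.5 * 0.999 ** t + 0.5 * 0.99 ** t
    implied = 1 - factor ** (1 / t)  # implied constant annual rate
    print(f"t={t:5d}  implied rate = {implied:.3%}")
# Implied rate falls from ~0.54% at t=10 toward ~0.11% at t=5000.
```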

Ord also points out that reducing x-risk will reduce the discount rate (through reducing the catastrophe rate) which can then lead to increasing returns on longtermist work.

I guess the summary of all this is that discounting doesn’t seem to invalidate longtermism or even strong longtermism, although discounting for the catastrophe rate is relevant and does reduce its bite at least to some extent.

Thank you Jack, very useful, and thank you for the reading suggestion too. Some more thoughts from me:

“Discounting for the catastrophe rate” should also include discounting for sudden positive windfalls or other successes that would make current actions less useful, e.g. if we find out that the universe is already populated by benevolent intelligent non-human life, or if an unexpected future invention suddenly solves societal problems.

There should also be an internal project discount rate (not mentioned in my original comment). So the general discount rate (discussed above) applies after you have discounted the project you are currently working on for the chance that the project itself becomes of no value – capturing internal project risks or windfalls, as opposed to catastrophic risk or windfalls.
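One way to see how the two layers combine (illustrative numbers only, not from the discussion above): the project-level discount and the general catastrophe-rate discount multiply.

```python
# Illustrative composition of discounts: value delivered at year t is
# scaled by the chance the project still matters AND the chance the
# world avoids catastrophe. All rates here are made-up examples.
catastrophe_rate = 0.001      # general background rate per year
project_failure_rate = 0.05   # chance per year the project becomes valueless
t = 20
overall = ((1 - project_failure_rate) * (1 - catastrophe_rate)) ** t
print(f"overall discount factor at year {t}: {overall:.3f}")  # ~0.35
```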

I am not sure I get the point about “discount the longterm future as if we were in the safest world among those we find plausible”.

I don’t think any of this (on its own) invalidates the case for longtermism but I do expect it to be relevant to thinking through how longtermists make decisions.

I think this is just what is known as Weitzman discounting. From Greaves’ paper Discounting for Public Policy:

“In a seminal article, Weitzman (1998) claimed that the correct results [when uncertain about the discount rate] are given by using an effective discount factor for any given time t that is the probability-weighted average of the various possible values for the true discount factor R(t): Reff(t) = E[R(t)]. From this premise, it is easy to deduce, given the exponential relationship between discount rates and discount factors, that if the various possible true discount rates are constant, the effective discount rate declines over time, tending to its lowest possible value in the limit t → ∞.”
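A minimal sketch of that claim, with hypothetical rates and probabilities (not from the paper): averaging the discount factors, not the rates, makes the implied annual rate decline toward the lowest possible value.

```python
# Weitzman effective discounting: the effective discount factor is the
# probability-weighted average of the possible discount factors,
# Reff(t) = E[(1 - r)^t], so the implied annual rate declines with t.
def effective_annual_rate(rates, probs, t):
    r_eff = sum(p * (1 - r) ** t for r, p in zip(rates, probs))
    return 1 - r_eff ** (1 / t)

rates, probs = [0.001, 0.01, 0.03], [1/3, 1/3, 1/3]
for t in [1, 10, 100, 1000]:
    print(t, effective_annual_rate(rates, probs, t))
# The printed rate falls from near the mean (~1.4%) toward the lowest
# possible rate (0.1%) as t grows.
```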

This video attempts to explain this in an Excel spreadsheet.

Makes sense. Thanks Jack.