Great introduction. Strongly upvoted. It is really good to see stuff written up clearly. Well done!!!
To add some points on discounting. This is not to disagree with you but to add some nuance to a topic it is useful for people to understand. Governments (or at least the UK government) apply discount rates for three reasons:
Firstly, pure time preference discounting of roughly 0.5%, because people want things now rather than in the future. This is what you seem to be talking about here when you talk about discounting. Interestingly, this rate is not set by electoral politics (the discount rate is not a big political issue) but because the literature on the topic has numbers ranging from 0% to 1%, so the government (which listens to experts) goes for 0.5%.
Secondly, catastrophic risk discounting of 1%, to account for the fact that a major catastrophic risk could make a project’s value worthless, e.g. earthquakes could destroy the new hospital, social unrest could ruin a social programme’s success, etc.
Thirdly, wealth discounting of 2%, to account for the fact that the future will be richer, so transferring wealth from now to the future has a cost. This does not apply to harms such as loss of life.
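To make the three components concrete, here is a minimal sketch of how they sum to the UK Green Book’s headline 3.5% social discount rate and what that implies for present values. The £100-in-50-years example is purely illustrative:

```python
# The three components described above, summed into the UK Green Book
# headline social discount rate. The present-value example is illustrative.

pure_time_preference = 0.005   # people prefer benefits now rather than later
catastrophic_risk    = 0.010   # a catastrophe could make the project worthless
wealth_growth        = 0.020   # the future is richer, so wealth transfers cost more

discount_rate = pure_time_preference + catastrophic_risk + wealth_growth
print(f"combined rate: {discount_rate:.1%}")  # 3.5%

def present_value(amount, years, rate=discount_rate):
    """Value today of `amount` received `years` from now."""
    return amount / (1 + rate) ** years

print(f"£100 in 50 years is worth £{present_value(100, 50):.2f} today")
```

Note that, per the point above, the 2% wealth component would be dropped when valuing non-monetary harms such as loss of life.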
Ultimately it is only the first of these that longtermists and philosophers tend to disagree with. The others may still be valid to longtermists.
For example, if you were to estimate there is a background, basically unavoidable, existential risk rate of 0.1% (as the UK government’s Stern Review discount rate suggests) then almost all the value of your actions (~99.3%) is eroded after 5,000 years, and arguably it could be not worth thinking beyond that timeframe. There are good counter-considerations to this; I’m not trying to start a debate here, just explaining how folk outside the longtermist community apply discounting and how it might reasonably apply to longtermists’ decisions.
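The erosion figure is easy to check: with a constant 0.1% annual survival hazard, the surviving fraction after 5,000 years is (1 − 0.001)^5000:

```python
# Quick check of the claim above: with a constant 0.1% annual background
# existential risk, how much expected value survives 5,000 years?

annual_risk = 0.001
years = 5000

surviving = (1 - annual_risk) ** years
print(f"value surviving after {years:,} years: {surviving:.2%}")  # ~0.67%
print(f"value eroded: {1 - surviving:.2%}")                       # ~99.33%
```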
Worth bearing in mind that uncertainty over the discount rate leads to you applying a discount rate that declines over time (Weitzman 1998). This avoids conclusions that say we can completely ignore the far future.
Yes, that is true and a good point. I think I would expect a very small non-zero discount rate to be reasonable, although I’m still not sure what relevance this has to arguments for longtermism.
Appendix A in The Precipice by Toby Ord has a good discussion on discounting and its implications for reducing existential risk. Firstly he says discounting on the basis of humanity being richer in the future is irrelevant because we are talking about actually having some sort of future, which isn’t a monetary benefit that is subject to diminishing marginal utility. Note that this argument may apply to a wide range of longtermist interventions (including non-x-risk ones).
Ord also sets the rate of pure time preference equal to zero for the regular reasons.
That leaves him with just discounting for the catastrophe rate, which he says is reasonable. However, he also says that it is quite possible we might be able to reduce catastrophic risk over time. This makes us uncertain about the future catastrophe rate, meaning, as mentioned, that we should apply a discount rate that declines over time, and that we should actually discount the long-term future as if we were in the safest world among those we find plausible. Therefore longtermism goes through, provided we have all the other requirements (the future is vast in expectation, there are tractable ways to influence the long-run future, etc.).
Ord also points out that reducing x-risk will reduce the discount rate (through reducing the catastrophe rate) which can then lead to increasing returns on longtermist work.
I guess the summary of all this is that discounting doesn’t seem to invalidate longtermism or even strong longtermism, although discounting for the catastrophe rate is relevant and does blunt the case at least to some extent.
Thank you Jack, very useful, and thank you for the reading suggestion too. Some more thoughts from me:
“Discounting for the catastrophe rate” should also include discounting for sudden positive windfalls or other successes that make current actions less useful, e.g. if we find out that the universe is populated by benevolent intelligent non-human life anyway, or if a future unexpected invention suddenly solves societal problems.
There should also be an internal project discount rate (not mentioned in my original comment). So the general discount rate (discussed above) applies after you have discounted the project you are currently working on for the chance that the project itself becomes of no value – capturing internal project risks or windfalls, as opposed to catastrophic risk or windfalls.
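The two-layer structure described above can be sketched as follows. All the numbers here are illustrative, not recommendations: an internal project hazard rate captures the chance the project itself becomes valueless, and the general discount rate is applied on top:

```python
# Sketch of two-layer discounting: first discount for the chance the
# project itself loses its value (internal project hazard), then apply
# the general discount rate. Both rates are illustrative assumptions.

import math

general_rate   = 0.015   # e.g. pure time preference + catastrophe rate
project_hazard = 0.030   # annual chance this specific project becomes worthless

def discounted_value(value, years):
    survives = math.exp(-project_hazard * years)  # internal project discount
    general  = math.exp(-general_rate * years)    # general discount
    return value * survives * general

print(f"£100 of benefit in 20 years counts as £{discounted_value(100, 20):.2f}")
```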
I am not sure I get the point about “discount the longterm future as if we were in the safest world among those we find plausible”.
I don’t think any of this (on its own) invalidates the case for longtermism but I do expect it to be relevant to thinking through how longtermists make decisions.
I think this is just what is known as Weitzman discounting. From Greaves’ paper Discounting for Public Policy:
In a seminal article, Weitzman (1998) claimed that the correct results [when uncertain about the discount rate] are given by using an effective discount factor for any given time t that is the probability-weighted average of the various possible values for the true discount factor R(t): Reff(t) = E[R(t)]. From this premise, it is easy to deduce, given the exponential relationship between discount rates and discount factors, that if the various possible true discount rates are constant, the effective discount rate declines over time, tending to its lowest possible value in the limit t → ∞.
This video attempts to explain this in an excel spreadsheet.
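The mechanism in the Greaves quote can also be shown in a few lines of code. This is a sketch with made-up candidate rates and probabilities: average the discount *factors* (not the rates) across the possible true rates, then back out the effective rate, which declines toward the lowest candidate:

```python
# Weitzman (1998) discounting as described in the quote above: the
# effective discount factor is the probability-weighted average of the
# possible discount factors. Candidate rates and weights are made up.

import math

possible_rates = [0.01, 0.05]   # hypothetical candidate true discount rates
probabilities  = [0.5, 0.5]

def effective_factor(t):
    # Reff(t) = E[R(t)], averaging over the possible true rates
    return sum(p * math.exp(-r * t) for p, r in zip(probabilities, possible_rates))

def effective_rate(t):
    return -math.log(effective_factor(t)) / t

for t in (1, 50, 200, 1000):
    print(f"t={t:>4}: effective rate = {effective_rate(t):.3%}")
# The effective rate declines with t, tending to the lowest rate (1%).
```

This is why, as noted above, uncertainty over the discount rate blocks the conclusion that we can completely ignore the far future.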
Makes sense. Thanks Jack.
To add a more opinionated, less factual point: as someone who researches and advises policymakers on how to think about and make long-term decisions, I tend to be somewhat disappointed by the extent to which the longtermist community lacks discussion and understanding of how long-term decision making is done in practice. I guess, if put strongly, this could be worded as an additional community-level objection to longtermism along the lines of:
Objection: The longtermist idea makes quite strong, somewhat counterintuitive claims about how to do good, but the longtermist community has not yet demonstrated appropriately strong intellectual rigour (other than in the field of philosophy) about these claims and what they mean in practice. Individuals should therefore be sceptical of the claims longtermists make about how to do good.
If worded more politely the objection would basically be that the ideas of longtermism are very new and somewhat untested and may still change significantly so we should be super cautious about adopting the conclusions of longtermists for a while longer.
Thanks so much for both these comments! I definitely missed some important detail there.
Do you think there are any counterexamples to this? For example certain actions to reduce x-risk?
I guess some of the “AI will be transformative, therefore deserves attention” arguments are some of the oldest and most generally accepted within this space.
For various reasons I think the arguments for focusing on x-risk are much stronger than other longtermist arguments, but how best to do this, what x-risks to focus on, etc, is all still new and somewhat uncertain.