Thank you Jack, very useful. Thank you for the reading suggestion too. Some more thoughts from me:
“Discounting for the catastrophe rate” should also include discounting for sudden positive windfalls or other successes that would make current actions less useful, e.g. if we find out that the universe is already populated by benevolent intelligent non-human life, or if an unexpected future invention suddenly solves societal problems, etc.
There should also be an internal project discount rate (not mentioned in my original comment). So the general discount rate (discussed above) applies after you have discounted the project you are currently working on for the chance that the project itself becomes of no value, capturing internal project risks or windfalls as opposed to catastrophic risks or windfalls.
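To make that concrete, here is a minimal sketch (the numbers are made up for illustration, not estimates of anything real): if catastrophe, positive windfall, and internal project failure are each modelled as constant annual hazard-style rates, the rates add in the exponent, so the corresponding discount factors simply multiply.

```python
import math

# Illustrative, assumed annual rates (not estimates of anything real):
catastrophe_rate = 0.002      # chance per year the future is lost to catastrophe
windfall_rate = 0.001         # chance per year a positive surprise makes our work moot
project_failure_rate = 0.05   # chance per year this particular project becomes valueless

def discount_factor(rate, t):
    """Survival-style discount factor for a constant annual hazard rate."""
    return math.exp(-rate * t)

t = 50  # years ahead
general = discount_factor(catastrophe_rate + windfall_rate, t)
internal = discount_factor(project_failure_rate, t)

# The general discount applies on top of the internal project discount...
combined = general * internal
# ...which is equivalent to a single rate that is the sum of all three:
assert abs(combined - discount_factor(catastrophe_rate + windfall_rate + project_failure_rate, t)) < 1e-9
print(f"general={general:.3f}, internal={internal:.3f}, combined={combined:.3f}")
```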
I am not sure I get the point about “discount the longterm future as if we were in the safest world among those we find plausible”.
I don’t think any of this (on its own) invalidates the case for longtermism but I do expect it to be relevant to thinking through how longtermists make decisions.
I think this is just what is known as Weitzman discounting. From Greaves’ paper Discounting for Public Policy:

“In a seminal article, Weitzman (1998) claimed that the correct results [when uncertain about the discount rate] are given by using an effective discount factor for any given time t that is the probability-weighted average of the various possible values for the true discount factor R(t): R_eff(t) = E[R(t)]. From this premise, it is easy to deduce, given the exponential relationship between discount rates and discount factors, that if the various possible true discount rates are constant, the effective discount rate declines over time, tending to its lowest possible value in the limit t → ∞.”
This video attempts to explain this in an Excel spreadsheet.
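The same point can also be reproduced in a few lines of Python rather than a spreadsheet. This is only a sketch with made-up candidate rates and credences, but it shows the effective rate implied by the probability-weighted average of discount factors falling towards the lowest candidate rate as t grows:

```python
import math

# Made-up example: we are unsure what the true constant discount rate is.
candidate_rates = [0.01, 0.03, 0.07]      # possible true annual rates
probabilities   = [0.3,  0.4,  0.3]       # our credences in each

for t in [1, 10, 50, 200, 1000]:
    # Weitzman: average the discount FACTORS, not the rates.
    effective_factor = sum(p * math.exp(-r * t)
                           for p, r in zip(probabilities, candidate_rates))
    # Back out the effective (annualised) discount rate implied by that factor.
    effective_rate = -math.log(effective_factor) / t
    print(f"t={t:>5}: effective rate = {effective_rate:.4f}")

# As t grows, the effective rate tends to the lowest candidate rate (0.01 here).
```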
Makes sense. Thanks Jack.