> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There’s been some great societal catastrophe and we’re rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that’s the hinge moment.
I agree (and have used this in calculations about optimal disbursement and savings rates) that the chance of a future altruist funding crash is an important reason for saving (e.g. medium-scale donors can provide insurance against a huge donor like the Open Philanthropy Project not entering an important area or being diverted). However, the particularly relevant kind of event for saving is a ‘catastrophe’ that cuts other altruistic funding (or similar resources) while leaving one’s own savings unaffected. Good Ventures going awry fits that bill better than a nuclear war (which would also, with high probability, destroy a DAF saving for the future).
Saving extra for a catastrophe that destroys one’s savings and the broader world at the same rate is a bet on proportional influence being more important in the poorer, smaller post-disaster world, which seems like a weaker consideration. Saving or buying insurance that pays off specifically in those cases, e.g. time capsule messages to post-apocalyptic societies, or catastrophe bonds/insurance contracts that release funds in the event of a crash in the EA movement, gets more oomph.
I’ll also flag that we’re switching back and forth here between two questions: which century has the highest marginal impact per unit of resources, and how much is worth saving or spending for which periods.
>Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high.
I think this is true for what little EV of ‘most important century’ remains so far out, but that residual is very small. Note that Martin Weitzman’s argument for discounting the future at the lowest possible rate (where we weight even very unlikely scenarios in which discount rates remain low, yielding a low discount rate for the very long term) gives different results with an effectively bounded utility function. If we face a limit like ‘~max value future’ or ‘utopian light-cone after a great reflection’, then we can’t make up for increasingly unlikely scenarios with correspondingly greater incremental probability of achieving ~that maximum: diminishing returns mean we can’t exponentially grow the utility gained from resources indefinitely (going from 99% of all wealth to 99.9% or 99.999% and so on will yield only a bounded increment to the chance of a utopian long-term).

A related limit to growth (although there is some chance it could be avoided, making it another drag factor rather than a hard ceiling) comes if the chance of expropriation rises as one’s wealth becomes a larger share of the world: a foundation with 50% of world wealth would be likely to face new taxes.
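Weitzman’s rate-uncertainty point can be sketched numerically. Below is a toy illustration (not from the comment above; the 1%/5% rates and 50/50 odds are arbitrary assumptions): because the expected discount factor averages over rate scenarios, the low-rate scenario dominates at long horizons and the effective rate falls toward the lowest possible rate.

```python
import math

# Toy Weitzman-style certainty-equivalent discounting: we are uncertain
# about the true per-period discount rate (1% vs 5%, equally likely).
rates = [0.01, 0.05]
probs = [0.5, 0.5]

def effective_rate(t):
    # Expected discount factor E[e^{-r t}] across the rate scenarios,
    # converted back into an annualized rate -ln(E[e^{-r t}]) / t.
    ce_factor = sum(p * math.exp(-r * t) for p, r in zip(probs, rates))
    return -math.log(ce_factor) / t

# At short horizons the effective rate is near the average of the two
# rates; at long horizons it approaches the minimum rate (1%), because
# the high-rate scenario's contribution to E[e^{-r t}] vanishes first.
for t in (10, 100, 1000):
    print(t, round(effective_rate(t), 4))
```

With a bounded utility function, by contrast, this averaging argument loses force: the increasingly unlikely low-rate (or high-payoff) scenarios can no longer be offset by unboundedly large value, so the far-out tail contributes only a small residual to the EV.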