One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!
I hadn’t even taken into account future donors; if you take that into account, then yeah, we should be doing even more now. Huh. Maybe it should be more like 20%. Then there’s also the discount rate to think about: various risks of our money being confiscated, falling under the control of unaligned people, some random other catastrophe killing most of our impact, and so on. (Historically, foundations quite typically diverge from the original vision/mission laid out by their founders.)
I’ve read the hinge of history argument before, and was thoroughly unconvinced (for reasons other people explained in the comments).
Hmmm, toy model time: suppose that our overall impact is log(spending in 2021) + log(spending in 2022) + log(spending in 2023) + …, up until some year when existential safety is reached or the x-risk point of no return is passed.
Then is it still the case that going from e.g. a 10% interest rate to a 20% interest rate means we should spend less in 2021? Idk, but I’ll go find out! (Since I take this toy model to be reasonably representative of our situation)
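For what it’s worth, this toy model is small enough to brute-force. Here is a minimal sketch (function names are mine, purely illustrative): with log utility summed over a fixed horizon, spend some amount in year 1, let the rest compound, and split remaining wealth evenly over later years (which is the closed-form optimum for a sum-of-logs continuation). The grid search suggests the optimal first-year spend does not move when the interest rate does.

```python
import math

def total_impact(first_spend, wealth, rate, years):
    """Sum of log(spending) over `years`: spend `first_spend` in year 1,
    then an equal share of remaining wealth each later year (the
    closed-form optimum for a sum-of-logs continuation)."""
    impact = math.log(first_spend)
    wealth = (wealth - first_spend) * (1 + rate)
    for t in range(1, years):
        spend = wealth / (years - t)  # split remaining wealth evenly
        impact += math.log(spend)
        wealth = (wealth - spend) * (1 + rate)
    return impact

def best_first_spend(wealth, rate, years, grid=2000):
    """Brute-force the year-1 spend on a grid; later years follow the rule above."""
    candidates = [wealth * (i + 1) / (grid + 1) for i in range(grid)]
    return max(candidates, key=lambda s: total_impact(s, wealth, rate, years))

# With $100 and a 10-year horizon, the optimal first-year spend is ~$10
# whether the interest rate is 10% or 20%: the rate drops out under log utility.
print(best_first_spend(100.0, 0.10, 10), best_first_spend(100.0, 0.20, 10))
```

This matches the comment further down that, with log utility, the interest rate factors out of the give-now-vs-later decision.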
I highly recommend the Founder’s Pledge report on Investing to Give. It goes through and models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving-later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.
I think it’s implausible that the optimal giving rate today could be 0%. This is because many giving opportunities function as a form of investment, and we’re pretty sure that the best of those outperform the financial market. (I wrote more about ~this in this post: https://forum.effectivealtruism.org/posts/Eh7c9NhGynF4EiX3u/patient-vs-urgent-longtermism-has-little-direct-bearing-on )
Hi Owen, even if you’re confident that you can identify investment-like giving opportunities today whose returns beat financial markets, investing-to-give can still be desirable. That’s because investing-to-give preserves optionality: giving today locks in the expected impact of your grant, while waiting allows you to fund potentially higher-impact opportunities in the future.
The secretary problem comes to mind (not a perfect analogy, but I think the insight applies). The optimal strategy is to reject the initial ~37% (i.e., 1/e) of all applicants and then accept the next applicant who is better than every one seen so far. Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. Otherwise, we should continue rejecting opportunities. This lets us better understand the extent of impact that’s actually possible, including opportunities like movement building and global priorities research. Future ones could be even better!
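For intuition on that 37% figure, here is a quick simulation of the classic cutoff strategy (a sketch; function and variable names are mine): reject the first 1/e of applicants, then take the first one who beats everyone seen so far. It recovers the single best applicant roughly 37% of the time, and does better than a shorter observation window.

```python
import random

def success_rate(n, cutoff_frac, trials=20000, seed=0):
    """Fraction of trials in which the cutoff strategy hires the single
    best of n applicants: reject the first cutoff_frac * n applicants,
    then take the first one better than everyone seen so far."""
    rng = random.Random(seed)
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank n-1 is the best applicant
        rng.shuffle(ranks)
        best_rejected = max(ranks[:cutoff], default=-1)
        # first later applicant beating all rejected ones; else stuck with the last
        chosen = next((r for r in ranks[cutoff:] if r > best_rejected), ranks[-1])
        wins += (chosen == n - 1)
    return wins / trials

# A 1/e ~ 37% observation window wins about 37% of the time,
# noticeably better than a 10% window.
print(success_rate(100, 0.37), success_rate(100, 0.10))
```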
But the investment-like giving opportunities also preserve optionality! This is the sense in which they are investment-like. They can result in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values than if we just make financial investments now.
Thanks for the clarification, Owen! I had misunderstood ‘investment-like’ as simply having return-compounding characteristics. To truly preserve optionality, though, these grants would need to remain flexible (able to change cause areas if necessary, so grants to a specific cause area like AI safety wouldn’t necessarily count) and liquid (able to be called upon immediately, so Founder’s Pledge future pledges wouldn’t necessarily count). So yes, your example of grants that result “in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values” certainly qualifies, but I suspect that’s about it. Still, as long as such grants exist today, I now understand why you say that an optimal giving rate of (exactly) 0% is implausible.
If I recall correctly (and I may well be wrong), the secretary problem’s classic solution applies only if all you care about is selecting the single best candidate (payoff 1 for the best, 0 for anyone else), rather than the expected quality of your pick; I’ve never come across a problem where this was a useful assumption.
Interesting! The secretary problem does seem relevant as a model, thanks!
Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future.
FWIW, many of us do think that. I do, for example.
Thanks Wayne, will read!
That toy model is similar to Phil’s, so I’d start by reading his stuff. IIRC with log utility the interest rate factors out. With other functions, it can go either way.
However, if your model is more like impact = log(all time longtermist spending before the hinge of history), which also has some truth to it, then I think higher interest rates will generally make you want to give later, since they mean you get more total resources (so long as you can spend it quickly enough as you get close to the hinge).
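To put rough numbers on that (a toy calculation of mine, with an assumed 10-year hinge): under impact = log(total pre-hinge spending), a dollar spent now adds 1 to the total, while a dollar invested until the hinge adds (1 + r)^T, so the opportunity cost of early spending grows quickly with the rate.

```python
def forgone_growth(rate, years_to_hinge):
    """Later dollars forgone per dollar spent today, if the money could
    otherwise compound until the hinge of history."""
    return (1 + rate) ** years_to_hinge

# Doubling the interest rate much more than doubles the penalty on
# early spending over a 10-year horizon:
print(forgone_growth(0.10, 10))  # ~2.59 later dollars per early dollar
print(forgone_growth(0.20, 10))  # ~6.19 later dollars per early dollar
```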
I think the discount rate for the things you talk about is probably under 1% per year, so doesn’t have a huge effect either way. (Whereas if you think EA capital is going to double again in the next 10 years, then that would double the ideal percentage to distribute.)
Will do, thanks!