It would seem to me that a philanthropist who is purely interested in maximizing the impact of altruistic spending would have to be operating in a fairly narrow range of confidence in their ability to shape the future in order for this kind of investing to make sense. In other words: either I can affect things like AI risk, future culture, and long-term outcomes in a way that implies above-market ‘returns’ (in human welfare) to my donation over extended time frames, in which case I should spend what money I’m willing to give to those causes today, investing nothing for future acts of altruism. Or I have little confidence in my judgment on these future matters, in which case I should help people living today and, again, likely invest nothing. Only in some narrow middle ground, where I think the ROI on these investments will allow for better effective altruism in the future (though I have no good idea how to influence it otherwise), would it make sense to put aside money like this.
There are of course other reasons that someone with a great deal of money wouldn’t want to try to spend it all at once. It’s understood that it’s actually difficult to give away a billion dollars efficiently, so donating it over time makes sense as a way to get feedback and avoid diminishing returns in specific areas. But this is a separate concern.
I sort of agree but not entirely:
“either I can affect things like AI risk [...] in which case I should spend what money I’m willing to give to those causes today”
It could be that you expect you can affect important things today, but that your actions won’t have a perpetually compounding effect, so in the long run you can do better by investing to give later.
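To make that trade-off concrete, here is a minimal numeric sketch (in Python) of the comparison between giving now with a one-off effect and investing to give later. All of the numbers are assumptions invented for illustration: the budget, the market return, the time horizon, and the welfare-per-dollar figure are not claims from the discussion above.

```python
# A toy comparison of "give now" vs. "invest and give later", assuming the
# benefit of a donation is a one-off effect while invested money compounds.
# Every number below is a made-up illustration, not an empirical estimate.

def give_now(budget: float, welfare_per_dollar: float) -> float:
    """Welfare from donating the whole budget today (no compounding effect)."""
    return budget * welfare_per_dollar

def invest_then_give(budget: float, market_return: float, years: int,
                     welfare_per_dollar: float) -> float:
    """Welfare from investing at a compounding market return, then donating."""
    grown = budget * (1 + market_return) ** years
    return grown * welfare_per_dollar

if __name__ == "__main__":
    budget = 1_000_000       # hypothetical giving budget, in dollars
    welfare = 1.0            # welfare units per dollar donated (illustrative)
    market_return = 0.05     # assumed real annual return
    years = 30

    now = give_now(budget, welfare)
    later = invest_then_give(budget, market_return, years, welfare)
    print(f"give now:   {now:,.0f} welfare units")
    print(f"give later: {later:,.0f} welfare units")
    # If donating today produced effects that compounded faster than the
    # market (the "above-market returns" branch of the argument), giving
    # now would win instead.
```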
“Or I have little confidence in my judgment on these future matters, in which case I should help people living today and, again, likely invest nothing.”
Similarly, in this case it could be that you expect to do better by investing your wealth to help the future eventually, and that by the time that future comes, you (or your descendants) will know how to help the world at that point.
I do broadly agree that it only makes sense to invest for the long term if you make certain assumptions.