Elsewhere, Holden makes this remark about the optimal timing of donations:
Right now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund [12] if you feel so moved).
I’m guessing this will change in the future, for a number of reasons. [13]
Something I’d consider doing is setting some pool of money aside, perhaps invested such that it’s particularly likely to grow a lot if and when AI systems become a lot more capable and impressive, [14] in case giving opportunities come up in the future.
You can also, of course, donate to things today that others aren’t funding for whatever reason.
And in footnote 13:
I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models.
I’m taking the quote out of context a little bit here. I don’t know if Holden’s guess that giving opportunities will increase is one of OpenPhil’s reasons to spend at a low rate. There might be other reasons. Also, Holden is talking about individual donations here, not necessarily about OpenPhil spending.
I’m adding it here because it might help answer the question “Why is the spending rate so low relative to AI timelines?” even though it’s only tangentially relevant.