Elsewhere, Holden makes this remark about the optimal timing of donations:
Right now there aren't a lot of obvious places to donate (though you can donate to the Long-Term Future Fund12 if you feel so moved).
I'm guessing this will change in the future, for a number of reasons.13
Something I'd consider doing is setting some pool of money aside, perhaps invested such that it's particularly likely to grow a lot if and when AI systems become a lot more capable and impressive,14 in case giving opportunities come up in the future.
You can also, of course, donate to things today that others aren't funding for whatever reason.
And in footnote 13:
I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models.
I'm taking the quote out of context a little bit here. I don't know if Holden's guess that giving opportunities will increase is one of OpenPhil's reasons to spend at a low rate. There might be other reasons. Also, Holden is talking about individual donations here, not necessarily about OpenPhil spending.
I'm adding it here because it might help answer the question "Why is the spending rate so low relative to AI timelines?" even though it's only tangentially relevant.