Can you comment a bit more on how the specific numbers of years (20 and 50) were chosen? Aren’t those intervals [very] conservative, especially given that AGI/TAI timeline estimates have shortened for many? E.g., if one took seriously the predictions from
wouldn’t it be reasonable to also have scenarios under which you might want to spend at least the AI risk portfolio in something like 5-10 years instead? Maybe this is covered somewhat by ‘Of course, we can adjust our spending rate over time’, but I’d still be curious to hear more of your thoughts, especially since I’m not aware of OpenPhil updates to spending plans based on shortened AI timelines, even after e.g. Ajeya discussed her shortened timelines.
Can the people who agreement-downvoted this explain yourselves? Bogdan has a good point: if we really believe in short timelines to transformative AI we should either be spending our entire AI-philanthropy capital endowment now, or possibly investing it in something that will be useful after TAI exists. What does not make sense is trying to set up a slow funding stream for 50 years of AI alignment research if we’ll have AGI in 20 years.
(Edit: the comment above had very negative net agreement when I wrote this.)
That question’s definition of AGI is probably too weak; it will probably resolve true a good deal before we have a dangerously powerful AI.
Maybe, though e.g. combined with
it would still result in a high likelihood of very short timelines to superintelligence (there can be inconsistencies between Metaculus forecasts, e.g. with
as others have pointed out before). I’m not claiming we should only rely on these Metaculus forecasts or that we should only plan for [very] short timelines, but I’m getting the impression the community as a whole and OpenPhil in particular haven’t really updated their spending plans with respect to these considerations (or at least this hasn’t been made public, to the best of my awareness), even after updating to shorter timelines.
Aiming to spend down in less than 20 years would not obviously be justified even if one’s median for transformative AI timelines were well under 20 years. This is because we may want extra capital in a “crunch time” where we’re close enough to transformative AI for the strategic picture to have become a lot clearer, and because even a 10-25% chance of longer timelines would provide some justification for not spending down on short time frames.
This move could be justified if the existing giving opportunities were strong enough even with a lower bar. That may end up being the case in the future. But we don’t feel it’s the case today, having eyeballed the stack rank.
I agree. This lines up with models of optimal spending I worked on, which allowed for a post-fire-alarm “crunch time” in which one can spend a significant fraction of remaining capital.
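The models themselves aren’t reproduced here, but a minimal toy sketch might make the “crunch time” structure concrete. Everything below is my own invented illustration, not the commenter’s actual model: the return, utility exponent, warning length, crunch-time multiplier, and timeline scenarios are all assumptions. Capital earns a return while held, TAI arrives in one of a few scenario years, an alarm rings a few years beforehand, and remaining capital can then be spent at elevated effectiveness; a grid search over constant pre-alarm spending rates then shows how the crunch-time option and a modest chance of long timelines interact.

```python
# Toy sketch only: parameters and functional forms are invented for
# illustration and are NOT the models the comment above refers to.
R = 1.05           # gross annual return on capital that is held
ETA = 0.5          # diminishing returns: utility of spending x is x**ETA
CRUNCH_MULT = 2.0  # assumed effectiveness multiplier during crunch time
WARNING = 3        # assumed years of warning before TAI (the "fire alarm")

def total_utility(spend_rate, tai_year, capital=1.0):
    """Utility if TAI arrives at tai_year, spending a fixed fraction of
    capital each year and dumping the remainder when the alarm rings."""
    total = 0.0
    for year in range(tai_year):
        if year == tai_year - WARNING:  # alarm rings: crunch time
            return total + CRUNCH_MULT * capital**ETA
        spend = spend_rate * capital
        total += spend**ETA
        capital = (capital - spend) * R
    return total

def expected_utility(spend_rate, scenarios):
    """Probability-weighted utility over (tai_year, probability) pairs."""
    return sum(p * total_utility(spend_rate, t) for t, p in scenarios)

# e.g. 75% chance of TAI in 15 years, 25% chance of longer timelines (40y)
scenarios = [(15, 0.75), (40, 0.25)]
rates = [i / 100 for i in range(2, 51)]  # sweep 2% to 50% per year
best = max(rates, key=lambda s: expected_utility(s, scenarios))
print(f"optimal pre-alarm spending rate: {best:.0%} of capital per year")
```

The specific output matters less than the shape: raising CRUNCH_MULT or the probability of the long-timeline scenario pushes the optimal pre-alarm rate down, which is the direction both comments above argue for.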
Elsewhere, Holden makes this remark about the optimal timing of donations:
Right now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund if you feel so moved).
I’m guessing this will change in the future, for a number of reasons.[13]
Something I’d consider doing is setting some pool of money aside, perhaps invested such that it’s particularly likely to grow a lot if and when AI systems become a lot more capable and impressive, in case giving opportunities come up in the future.
You can also, of course, donate to things today that others aren’t funding for whatever reason.
And in footnote 13:
I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models.
I’m taking the quote out of context a little bit here. I don’t know if Holden’s guess that giving opportunities will increase is one of OpenPhil’s reasons to spend at a low rate. There might be other reasons. Also, Holden is talking about individual donations here, not necessarily about OpenPhil spending.
I’m adding it here because it might help answer the question “Why is the spending rate so low relative to AI timelines?” even though it’s only tangentially relevant.