As a result of our internal process, we decided to keep that new higher bar, while also aiming to roughly double our GCR spending over the next few years — if we can find sufficiently cost-effective opportunities.
At first glance, this seems potentially ‘wildly conservative’ to me, considering what it implies for the AI risk mitigation portion of the funding and how that intersects with (shortening) timeline estimates.
My impression from a brief look at recent grants is that Open Philanthropy probably spent ≤ 150M$ on AI risk mitigation during the past year. Doubling AI risk spending would therefore imply ≤ 300M$ / year.
AFAICT (including based on non-public conversations / information), median forecasts for something like TAI / AGI are at this point very often < 10 years, especially from people who have thought the most about this question, and a very respectable share of those people seem to have < 5 year medians.

Given e.g. https://www.bloomberg.com/billionaires/profiles/dustin-a-moskovitz/, I assume Open Philanthropy could in principle spend > 20B$ in total. So 150M$ / year is less than 1% of the total portfolio, and even 300M$ / year would be < 2%.

X-risk estimates often have powerful AI accounting for more than half of the total x-risk from all sources (e.g. the estimates in ‘The Precipice’ put x-risk from AI at ~10% out of ~17% total x-risk over the next ~100 years).
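For concreteness, here is the simple arithmetic behind the fractions above, as a minimal sketch using only the figures already quoted in this comment (the 20B$ spendable total is my assumption from the Bloomberg profile, not an official figure):

```python
# Back-of-the-envelope check of the numbers above.
# All inputs are the figures quoted in this comment; none of this is an
# official Open Philanthropy estimate.

total_portfolio = 20e9       # assumed spendable total, USD (the > 20B$ figure above)
current_ai_spend = 150e6     # rough upper bound on last year's AI risk spend, USD
doubled_ai_spend = 2 * current_ai_spend

print(f"current spend / portfolio: {current_ai_spend / total_portfolio:.2%}")  # 0.75%
print(f"doubled spend / portfolio: {doubled_ai_spend / total_portfolio:.2%}")  # 1.50%

# 'The Precipice' ballpark: ~10% x-risk from AI out of ~17% total over ~100 years
ai_xrisk, total_xrisk = 0.10, 0.17
print(f"AI share of total x-risk: {ai_xrisk / total_xrisk:.0%}")  # ~59%
```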
Considering all the above, the current AI risk mitigation spending plans seem to me far too conservative.
I also personally find it pretty unlikely that there aren’t decent opportunities to spend > 300M$ / year (and especially > 150M$ / year), given e.g. the growth in public discourse about AI risks, and given that some plans could potentially be [very] scalable in how much funding they could absorb, e.g. field-building, non-mentored independent research, or automated AI safety R&D.
Am I missing something (obvious) here?
(P.S.: my perspective might be influenced / biased in a few ways here, given my AI risk mitigation focus and how that intersects / has intersected with Open Philanthropy funding and career prospects.)
Re: why our current rate of spending on AI safety is “low.” At least for now, the main reason is lack of staff capacity! We’re putting a ton of effort into hiring (see here) but are still not finding as many qualified candidates for our AI roles as we’d like. If you want our AI safety spending to grow faster, please encourage people to apply!
There is also the theoretical possibility of disbursing a larger amount of $ per hour of staff capacity.