I disagree; last I checked most AI safety research orgs think they could make more good hires with more money and see themselves as funding-constrained—at least all 4 that I’m familiar with: RP, GovAI, FAR, and AI Impacts.
Edit: also see the recent post *Alignment Grantmaking is Funding-Limited Right Now* (note that most alignment funding on the margin goes to paying and supporting researchers, in the general sense of the word).