I think it is important to keep in mind that we are not very funding constrained. It may be OK to have some false positives; false negatives may often be worse, so I wouldn’t be too cautious.
I think grantmakers are probably still too reluctant to fund work that has only a small chance of very high impact, especially when they are uncertain because the people involved aren’t EAs. For example, I told a very exceptional student (with something like 1-in-a-million problem-solving ability) to apply for the Atlas Fellowship, even though I don’t know him well, because from my limited knowledge it increases the chance that he will work on alignment from 10% to 20-25%, and the $50k is easily worth that.
Of course, accepting more false positives also attracts more applicants who only pretend to be doing something good, which isn’t easy for our current limited number of grantmakers to handle. We definitely need to scale up grantmaking capacity anyway.
I think non-EAs should know that they can get funding if they do something good/useful. You shouldn’t need to pretend to be an EA to get funding, and defending against people who merely pretend to run good projects seems easier in many cases; e.g. you can often start with a little funding and promise more later if they show progress.
(I also expect that AI-risk reduction will get much more funding as the problem becomes more widely known and acknowledged. I’d guess >$100B by 2030, so I don’t think funding will ever become a bottleneck, though of course I’m not totally sure.)