[Speaking for myself, not Open Philanthropy]
Empirically, I’ve observed some but not huge amounts of overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy’s programs; I’d estimate around 10%. And my guess is the “best historical grant opportunities” that Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of the MATS scholars in particular).
More broadly, many grantmakers at Open Philanthropy (including myself and Ajeya, who is currently the only person working full-time on technical AI safety grantmaking) are extremely capacity-constrained, so I wouldn’t make strong inferences that a given project isn’t cost-effective purely on the basis that Open Philanthropy hasn’t already funded it.
[1] I don’t know exactly which grants this refers to and haven’t looked at our current highest-rated grants in depth; I’m not intending to imply that I necessarily agree (or disagree) with Habryka’s statement.
Thank you for the detailed reply. That seems like surprisingly little overlap; I hope more people apply.
I’m also really glad to hear that OP may fund some of the MATS scholars, as the original post mentioned that “some of [the unusual funding constraint] is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality”.
Thank you again for taking the time to reply given the extreme capacity constraints.