* This area does seem strangely neglected still. For example, based on grant pay-outs and feedback received, I believe that:
Open Phil does not fund new longtermist-adjacent policy advocacy projects, with the exception of CSET (a project run by a very senior political figure), although they do give to existing policy projects such as existing large biosecurity organisations.
The Long-Term Future Fund receives numerous applications for policy projects (I know of a few and am sure there have been more) but does not appear to have funded any such work. (As far as I am aware, the APPG is the only policy advocacy project the LTFF has ever funded, and that was when it was already running and had built traction; they would not fund it to do anything new or different from what it was already doing, because of the risks.)
My charitable take is that the LTFF's (and maybe others') lack of focus on policy is because of the challenge of vetting policy work. Consider, for example, that to vet policy advocacy projects well you would ideally want an expert who understands policy in the specific country the grant is for, but having policy experts for every country is impractical.
I think policy is a domain where there are still many opportunities for spotting and funding projects early, and for having more impact than donating to the EA Funds. As mentioned, it may be very hard for the EA Funds to do this kind of work.
That said, I think this only applies if you have (or know and trust someone with) relevant expertise. There are risks, and for plenty of the projects I have come across my reaction was: this should not be funded.
(I had planned to write a whole post on this and on how to do active grant-making well as a small donor – not sure if I will have time, but maybe.)
I had planned to write a whole post on this and on how to do active grant-making well as a small donor – not sure if I will have time but maybe
I would love to read this post (especially any insights that might transfer to someone with AI Safety expertise, but not much in other areas of EA!). Do you think there's much value in small donors giving to areas they don't know much about? Especially in areas with potential high downside risk, like policy. E.g., is the average value of the marginal "not fully funded" policy project obviously positive or negative?
Hey Neel. It doesn't add much (I still haven't had time for a top-level post), but in case it's helpful, a bit more of my reasoning is set out here.