I think the simple answer is that it’s become less prioritised by the central orgs (the EA GHD fund is on indefinite hiatus, GHD is a diminishing part of CoGi’s budget, 80k moved away from it almost entirely, Rethink seem to have shifted towards animal welfare, CEA seem to have an increasingly longtermist/AI focus, etc). This gives a top-down cultural impetus away from the subject, and just means there’s less money in it.
It’s also, for better or worse, an evidence-oriented field, which makes it harder to have amateur conversations about. I’ve been consistently supportive of it in my time here, but I’ve had very little to contribute to conversations about what actually works, and felt there was little value in contributing to any others.
I would love to see this reverse—I think EA is much richer for spanning multiple cause areas, and especially those which are well-evidenced. I don’t have any good solutions though :\
I agree that the depth of the evidence conversations doesn’t lend itself to amateur discussion on the forum, and for that reason I also feel there’s not much I have to add to the GHD discussions here.
Don’t think it’s fair to say it’s not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.
Last I heard it was something like 10% of their GCR budget.
It’s also basically impossible to apply for GHD funding. I recently decided to put my money where my mouth is and get involved in an early stage GHD project, but there’s basically no EA-aligned funder who’s willing to let you approach them.
SFF are exclusively longtermist, EA GHD as mentioned has basically shut down, and GiveWell and CoGi don’t accept unsolicited applications. So as far as I can see, if you have an idea in the GHD space and need funding for it, you basically have to look outside the EA world (someone tell me if I’ve missed something!)
Last I heard it was something like 10% of their GCR budget.
I don’t think that’s right — CG gave $400m to GHW in 2025, and to get a sense of what % that might be, Alexander Berger (CEO of CG) shared that overall “Coefficient Giving directed over $1 billion in 2025” in his recent letter.
Yep, this is a legitimate concern — it’s hard for new projects that aren’t being incubated through CE, for sure. I think there are decent arguments for bigger funders not funding new initiatives though. It’s not the worst thing for friends/family/non-EA funds to help start new initiatives before official funders get involved. Also (I could be wrong), if you made a very strong argument here on the forum, there might be people willing to help.
The Global Health Funding circle is another EA avenue for newer ventures :). Also, Scott Alexander’s yearly giveaway is open to new ideas, and they fund a bunch of GHD stuff.
Love this @Arepo, and I largely agree. I think there’s plenty of uncertainty and space for amateur-ish discussions about GHD stuff. Yes, even talking about specific interventions it helps to have specific knowledge, but mostly it’s figure-out-able for a switched-on person. I would say a lot of technical AI discussion is harder — I struggle to understand some of the threads on LessWrong!