I think the simple answer is that it’s become less prioritised by the central orgs (the EA GHD fund is on indefinite hiatus, GHD is a diminishing part of CoGi’s budget, 80k moved away from it almost entirely, Rethink seem to have shifted towards animal welfare, CEA seem to have an increasingly longtermist/AI focus, etc). This gives a top-down cultural impetus away from the subject, and just means there’s less money in it.
It’s also, for better or worse, an evidence-oriented field, which makes it a harder subject to have amateur conversations about. I’ve been consistently supportive of it in my time here, but I’ve had very little to contribute to conversations about what actually works, and felt there was little value in contributing to any others.
I would love to see this reverse—I think EA is much richer for spanning multiple cause areas, and especially those which are well-evidenced. I don’t have any good solutions though :\
I agree that the depth of the evidence conversations doesn’t lend itself to amateur discussion on the forum, and for that reason I also feel there’s not much I have to add to the GHD discussions here.
Don’t think it’s fair to say it’s not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.
Last I heard it was something like 10% of their GCR budget.
It’s also basically impossible to apply for GHD funding. I recently decided to put my money where my mouth is and get involved in an early stage GHD project, but there’s basically no EA-aligned funder who’s willing to let you approach them.
SFF are exclusively longtermist, EA GHD has, as mentioned, basically shut down, and GiveWell and CoGi don’t accept unsolicited applications. So as far as I can see, if you think you have an idea in the GHD space and need funding for it, you basically have to look outside the EA world (someone tell me if I missed something!)
> Last I heard it was something like 10% of their GCR budget.
I don’t think that’s right — CG gave $400m to GHW in 2025, and to get a sense of what % that might be, Alexander Berger (CEO of CG) shared that overall “Coefficient Giving directed over $1 billion in 2025” in his recent letter.
I’m confused by the strong negative reaction to this comment. I guess it’s about the CoGi funding, which does sound like I was wrong. But it seems to be true that there’s no option to directly apply for funding for a new project (NickLaing mentions the GH funding circle, but they completed one round last year and their website doesn’t currently imply there would be any more).
I think this helps explain the decline of GHD in the OP—AIM’s charity list notwithstanding, no-one in the movement is incentivised to come up with practical ideas in the field.
Yep, this is a legitimate concern; it’s certainly hard for new projects that aren’t being incubated through CE. I think there are decent arguments for bigger funders not funding new initiatives, though. It’s not the worst thing for friends/family/non-EA funds to help start new initiatives before official funders get involved. Also (I could be wrong), if you made a very strong argument here on the forum, there might be people willing to help.
The Global Health Funding Circle is another EA avenue for newer ventures :). Also, Scott Alexander’s yearly giveaway is open to new ideas, and they fund a bunch of GHD stuff.
A late comment to say that I don’t think RP takes the view that any given cause area is more important than another, either philosophically or in practice. Our GHD team produces a steady stream of (I think) interesting and helpful reports. Perhaps this perception stems from the fact that a lot of our GHD work is not public (for various reasons), or simply that people don’t engage with it as much as they might have in the past.
Thanks Tom. I’m sure that’s true in theory, but in practice RP is at the public forefront of the animal welfare work in the way that they aren’t in other work. That’s not to diminish other work, more to say that in the public sphere, the moral weights, cause prioritization work and surveys on community preferences point heavily in the direction of animal welfare.
So I might weakly disagree with your “in practice” claim. This might not be intentional, or even bad if it’s pushing animal welfare work more to the forefront.
Thanks for jumping in Nick. I appreciate the distinction.
To be clear, what I meant by “in practice” is the actual amount of effort, time, and resources RP dedicates to GHD internally, which is distinct from its public footprint and its ultimate impact. My point is simply that characterizing RP as having “shifted” to animal welfare doesn’t capture my sense of internal resource allocation and the external impact of our GHD work (some of which may not be in the public domain), even if that’s how it appears externally.
Love this @Arepo, and I largely agree. I think there’s plenty of uncertainty and space for amateur-ish discussions about GHD stuff. Yes, even talking about specific interventions, it helps to have specific knowledge, but mostly it’s figure-out-able for a switched-on person. I would say a lot of technical AI discussion is harder; I struggle to understand some of the threads on LessWrong!