“I think if you fall into the latter camp, it’s a perfectly valid reason to want to leave.”
I guess I find this framing quite unfortunate, though I won’t at all begrudge anyone who doesn’t want to associate with EA any more. Global Health & Development funding has never been higher, and is still the top cause area for EA funding as far as I’m aware. The relative situation is likely to become even more pro-GHD in the coming years as the money from FTX goes to $0.
On the other hand, many EAs focused on GHD seem to think they have no place left in the movement, which is really sad to me. I don’t want to get into philosophical arguments about the generalisability/robustness of Singer’s argument; I think EA can clearly be a big enough tent for work in a variety of cause areas, both longtermist and GHD. I don’t think it has stopped being that, but many people do seem to think it has. I’m not sure what the best path forward to bridge that gap is. Perhaps there need to be stronger public commitments to value pluralism from EA orgs/thought leaders?
Thank you for that link; I find it genuinely heartening. I definitely never want to discount the incredible work that EA does in GHD, and the many, many lives it has saved and continues to save in that area.
I can still see where the OP is coming from, though. When I first started following EA and donating many years ago, it was primarily a GHD movement focused on giving effectively to developing countries, based on rigorous, peer-reviewed trial evidence. I was happy and proud to present it to anyone I knew.
But now I see a movement whose core is vastly more concerned with AI risk than GHD. As an AI risk skeptic, I believe this is a mistake based on incorrect beliefs and reasoning, and that by its nature it lacks the rigorous evidence I expect from the GHD work. (You’re free to disagree with this, of course, but it’s a valid opinion and one a lot of people hold.) If I endorse and advocate for EA as a whole, a large fraction of the money brought in by that endorsement will end up going to causes I consider highly ineffective, whereas if I advocate for specific GHD causes, 100% of it will go to things I consider effective. So the temptation is to leave EA and just advocate directly for GHD orgs.
My current approach is to stick around, take the AI arguments seriously, and attempt to write in-depth critiques of what I find incorrect about them. But they take a lot of effort and are very hard to write, and it’s very easy to get discouraged and think it’s pointless. So I understand why a lot of people are not bothering.