I don't think this is a healthy way of framing disagreements about cause prioritization. Imagine if a fan of GiveDirectly started complaining about GiveWell's top charities for "redirecting money from the wallets of the world's poorest villagers..." Sounds almost like theft! Except, of course, that the "default" implicitly attributed here is purely rhetorical. No cause has any prior claim to the funds. The only question is where best to send them, and this should be determined in a cause-neutral way, not by picking out any one cause as the privileged "default" that is somehow robbed of its due by any or all competing candidates that receive funding.
Of course, you're free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of "redirected" funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances. So I'd encourage folks to try to avoid it.
I appreciate the feedback and I think it's helpful to think about what reference point we're using. I stand by what I'm saying, though, for a few reasons:
1) No cause has any prior claim to the funds, but they're zero-sum, and I think the counterfactual probably is more GH&D funding. Maybe there are funders who are willing to donate only to longtermist causes, but I think the model of a pool of money being split between GH&D/animal welfare and longtermism/x-risk is somewhat fair: e.g., OpenPhil splits its money between these two buckets, and a lot of EAs defer to the "party line." So "watching money get redirected from the Global South to AI researchers" is a true description of much of what's happening. (More indirectly, I also think EA's weirdness and futurism turns off many people who might otherwise donate to GiveWell. This excellent post provides more detail. I think it's worth thinking about whether packaging global health with futurism and movement-building expenses justified by post hoc Pascalian "BOTECs" really does more good than harm.)
2) Even if you don't buy this, I believe making GH&D the baseline is, to some extent, the point of EA (at least as I see it; Duncan Sabien says this is true of the drowning child thought experiment too). It says "don't pay an extra $5,000/year for rent to get a marginally nicer apartment, because the opportunity cost could be saving a life." At least, this is how Peter Singer frames it in The Life You Can Save, the book that originally got me into EA.
Also, this is basically what GiveWell does by using GiveDirectly as a lower bound that their top charities have to beat. They realize that if the alternative is giving to GD, giving to Malaria Consortium or New Incentives does in practice "redirect money from the wallets of the world's poorest villagers." I agree with their framing that this is an appropriate bar to expect their top charities to clear.
"(i) not an accurate way to think about the situation"
I agree that the framing could be improved, but I'm not sure the actual claim is inaccurate? There is a pool of donors who make their decisions based on the opinions of EA. Several years ago they were "directed" toward giving their money to global poverty. Now, due to a shift in opinion, they are "directed" toward giving their money to AI safety. At least some of that money has been "redirected": if the shift hadn't occurred, global poverty would probably have had more money, and AI safety probably would have had less.
As an AI risk believer, you think that this change in funding is on balance good, whereas the OP is an AI risk skeptic who thinks this shift in funding is bad. Both are valid opinions that cast no aspersions on one's character (and here is where I think the framing could be improved). I think if you fall into the latter camp, it's a perfectly valid reason to want to leave.
"I think if you fall into the latter camp, it's a perfectly valid reason to want to leave."
I guess I find this framing quite unfortunate, though I won't at all begrudge anyone who doesn't want to associate with EA any more. Global Health & Development funding has never been higher, and is still the top cause area for EA funding as far as I'm aware. The relative situation is likely to become even more pro-GHD in the coming years as the money from FTX goes to $0.
On the other hand, many EAs focused on GHD seem to think that they have no place left in the movement, which is really sad to me. I don't want to get into philosophical arguments about the generalisability/robustness of Singer's argument, but I think EA can clearly be a big enough tent for work on a variety of cause areas, both longtermist and GHD. I think it still is one at the moment, but many people seem to think it isn't. I'm not sure what the best path forward to bridge that gap is; perhaps there need to be stronger public commitments to value pluralism from EA orgs/thought leaders?
Thank you for that link; I find it genuinely heartening. I definitely don't want to ever discount the incredible work that EA does do in GHD, and the many, many lives that it has saved and continues to save in that area.
I can still see where the OP is coming from, though. When I first started following EA and donating many years ago, it was primarily a GHD organisation focused on giving to developing countries effectively, based on the results of rigorous, peer-reviewed trial evidence. I was happy and proud to present it to anyone I knew.
But now, I see an organisation where the core of EA is vastly more concerned with AI risk than GHD. As an AI risk skeptic, I believe this is a mistake based on incorrect beliefs and reasoning, and that by its nature it lacks the rigorous evidence I expect from the GHD work. (You're free to disagree with this, of course, but it's a valid opinion and one a lot of people hold.) If I endorse and advocate for EA as a whole, a large fraction of the money that is brought in by the endorsement will end up going to causes I consider highly ineffective, whereas if I advocate for specific GHD causes, 100% of it will go to things I consider effective. So the temptation is to leave EA and just advocate directly for GHD orgs.
My current approach is to stick around, take the AI arguments seriously, and attempt to write in-depth critiques of what I find incorrect about them. But it's a lot of effort and very hard work to write, and it's very easy to get discouraged and think it's pointless. So I understand why a lot of people are not bothering.
There is a pool of donors who make their decisions based on their own beliefs and the beliefs of individuals they trust, not "EA." See this post.
I am one of those donors, as are you, probably. I'm not a high earner, but it does count. I make my decisions based on my own beliefs and the beliefs of people I trust. I also make them based on the opinions of EA, whenever I go look at the top charities on givewell.org to guide my donation decisions.
There are at least some people who were previously donating to global poverty orgs based on EA recommendations and who are now donating to AI risk instead, also based on EA recommendations, due to the shift in priorities among core EA. If the shift had not occurred, these people would still be donating to global poverty. You are welcome to view this as good or bad, but it's still true.
Will probably add this in as another example when I publish an update/expanded appendix to Setting the Zero Point.