I No Longer Feel Comfortable in EA
A few years ago, I read The Life You Can Save by Peter Singer. I felt deeply inspired. The idea that charities could be compared using evidence and reason, the thought that I could save many lives without sacrificing my own happiness: I found these ideas meaningful, and I hoped they would give my life a sense of purpose (even if other factors were likely also at play).
I became an Intro Fellow and read more. I went to conferences and retreats. I now lead my university group.
But I’m frustrated.
I’m now asked to answer for the actions of a man who defrauded millions of people, and for the purchase of castles and $2000+ coffee tables.
I’m now associated with predatory rationalists.
I’m now told to spend my life reducing existential risk by 0.00001 percent to protect 10^18 future humans, and forced to watch money get redirected from the Global South to AI researchers.[1]
This is not what I signed up for.
I used to be proud to call myself an EA. Now, when I say it, I also feel shame and embarrassment.
I will take the Giving What We Can pledge, and I will stay friends with the many kind EAs I’ve met.
But I no longer feel represented by this community. And I think a lot of others feel the same way.
Edit log (2/6/23, 12:28pm): Edited the second item of the list, see RobBensinger’s comment.
[1]
This is not to say that longtermism is completely wrong—it’s not. I do, however, think “fanatical” or “strong” longtermism has gone too far.
Is influencing the far future really tractable? How is x-risk reduction not a Pascal’s mugging?
I agree that future generations are probably too neglected right now. But I just don’t find myself entirely convinced by the current EA answers to these questions. (See also.)
I don’t think this is a healthy way of framing disagreements about cause prioritization. Imagine if a fan of GiveDirectly started complaining about GiveWell’s top charities for “redirecting money from the wallets of the world’s poorest villagers...” Sounds almost like theft! Except, of course, that the “default” implicitly attributed here is purely rhetorical. No cause has any prior claim to the funds. The only question is where best to send them, and this should be determined in a cause-neutral way, not by picking out any one cause as the privileged “default” that is somehow robbed of its due by any or all competing candidates that receive funding.
Of course, you’re free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of “redirected” funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances. So I’d encourage folks to try to avoid it.
I appreciate the feedback and I think it’s helpful to think about what reference point we’re using. I stand by what I’m saying, though, for a few reasons:
1) No cause has any prior claim to the funds, but they’re zero-sum, and I think the counterfactual probably is more GH&D funding. Maybe there are funders who are willing to donate only to longtermist causes, but I think the model of a pool of money being split between GH&D/animal welfare and longtermism/x-risk is somewhat fair: e.g., OpenPhil splits its money between these two buckets, and a lot of EAs defer to the “party line.” So “watching money get redirected from the Global South to AI researchers” is a true description of much of what’s happening. (More indirectly, I also think EA’s weirdness and futurism turn off many people who might otherwise donate to GiveWell. This excellent post provides more detail. I think it’s worth thinking about whether packaging global health with futurism and movement-building expenses justified by post hoc Pascalian “BOTECs” really does more good than harm.)
2) Even if you don’t buy this, I believe making GH&D the baseline is (at least as I see it—Duncan Sabien says this is true of the drowning child thought experiment too), to some extent, the point of EA. It says “don’t pay an extra $5,000/year for rent to get a marginally nicer apartment because the opportunity cost could be saving a life.” At least, this is how Peter Singer frames it in The Life You Can Save, the book that originally got me into EA.
Also, this is basically what GiveWell does by using GiveDirectly as a lower bound that their top charities have to beat. They realize that if the alternative is giving to GD, giving to Malaria Consortium or New Incentives does in practice “redirect money from the wallets of the world’s poorest villagers.” I agree with their framing that this is an appropriate bar to expect their top charities to clear.
I agree that the framing could be improved, but I’m not sure the actual claim is inaccurate? There is a pool of donors who make their decisions based on the opinions of EA. Several years ago they were “directed” toward giving their money to global poverty. Now, due to a shift in opinion, they are “directed” toward giving their money to AI safety. At least some of that money has been “redirected”: if the shift hadn’t occurred, global poverty would probably have had more money, and AI safety probably would have had less.
As an AI risk believer, you think that this change in funding is on balance good, whereas the OP is an AI risk skeptic who thinks this shift in funding is bad. Both are valid opinions that cast no aspersions on one’s character (and here is where I think the framing could be improved). I think if you fall into the latter camp, it’s a perfectly valid reason to want to leave.
“I think if you fall into the latter camp, it’s a perfectly valid reason to want to leave.”
I guess I find this framing quite unfortunate, though I won’t at all begrudge anyone if they don’t want to associate with EA any more. Global Health & Development funding has never been higher, and is still the top cause area for EA funding as far as I’m aware. The relative situation is likely to get even more pro-GHD in the coming years, as the money from FTX goes to $0.
On the other hand, many EAs focused on GHD seem to think that they have no place left in the movement, which is really sad to me. I don’t want to get into philosophical arguments about the generalisability/robustness of Singer’s argument; I think EA can clearly be a big enough tent for work on a variety of cause areas, both longtermist and those in GHD. I don’t think it has stopped being one, but many people do seem to think it isn’t. I’m not sure what the best path forward to bridge that gap is; perhaps there need to be stronger public commitments to value pluralism from EA orgs/thought leaders?
Thank you for that link, I find it genuinely heartening. I definitely don’t want to ever discount the incredible work that EA does do in GHD, and the many, many lives that it has saved and continues to save in that area.
I can still see where the OP is coming from, though. When I first started following EA and donating many years ago, it was primarily a GHD organisation, focused on giving to developing countries effectively based on the results of rigorous peer-reviewed trial evidence. I was happy and proud to present it to anyone I knew.
But now, I see an organisation where the core of EA is vastly more concerned with AI risk than GHD. As an AI risk skeptic, I believe this is a mistake based on incorrect beliefs and reasoning, and by its nature it lacks the rigorous evidence I expect from the GHD work. (You’re free to disagree with this, of course, but it’s a valid opinion and one a lot of people hold.) If I endorse and advocate for EA as a whole, a large fraction of the money that is brought in by the endorsement will end up going to causes I consider highly ineffective, whereas if I advocate for specific GHD causes, 100% of it will go to things I consider effective. So the temptation is to leave EA and just advocate directly for GHD orgs.
My current approach is to stick around, take the AI arguments seriously, and attempt to write in-depth critiques of what I find incorrect about them. But it’s a lot of effort and very hard work to write, and it’s very easy to get discouraged and think it’s pointless. So I understand why a lot of people are not bothering.
There is a pool of donors who make their decisions based on their own beliefs and the beliefs of individuals they trust, not “EA.” See this post.
I am one of those donors, as are you, probably. I’m not a high earner, but it does count. I make my decisions based on my own beliefs and the beliefs of those I trust. I also make them based on the opinions of EA, whenever I look at the top charities of givewell.org to guide my donation decisions.
There are at least some people who were previously donating to global poverty orgs based on EA recommendations, and who are now donating to AI risk instead, based on EA recommendations, due to the shift in priorities among core EA. If the shift had not occurred, these people would still be donating to global poverty. You are welcome to view this as good or bad if you want, but it’s still true.
Will probably add this in as another example when I publish an update/expanded appendix to Setting the Zero Point.
I’ve never felt comfortable in EA broadly construed, not since I encountered it about three years ago. And yet I continue to be involved to a certain extent. Why? Because I think that doing so is useful for doing good, and many of the issues that EA focuses on are sadly still far too neglected elsewhere. Many of the people who come closest to sharing my values are in EA, so even if I didn’t want to be “in EA,” it would be pretty difficult to remove myself entirely.
I also love my university EA group, which is (intentionally, in part by my design, in part by the design of others) different from many other groups I’ve encountered.
I work in AI safety, and so the benefit of staying plugged into EA for me is probably higher than it would be for somebody who wants to work in global health and development. But I could still be making a (potentially massive) miscalculation.
If you think that EA is not serving your aims of doing good (the whole point of EA), then remember to look out the window. And even if you run an “EA” group, you don’t need to feel tied to the brand. Do what you think will actually be good for the world. Best of luck.
Seems really healthy and good to figure out what parts of it work for you and what parts don’t, and not feel like you need to answer for the parts that don’t.
“predatory polyamorous rationalists” is pretty bigoted. What would we think if someone referred to “predatory gays”?
But I’m a polyamorous EA, and I’m frustrated that I’m now associated with predatory polyamorous rationalists too. Their comment didn’t claim that the association was right to make, or even that such a cohort exists, just that the association is happening.
I think the line is a bit better if you replace rationalists with pseudo-rationalists and tech-bros though.
[Edit: I think using the terms “bigoted language” or “appears bigoted” would have been better choices than “is bigoted”. I think you want to be very careful to avoid the misunderstanding that you are calling the person bigoted. I realize that quoting the phrase implied that “bigoted” referenced the language (not necessarily the person), but if you think someone is using imprecise language, that should make you update that your conversation partner is more likely to misunderstand your own language. Just as you want someone to speak extra carefully about sexual predators, we can all speak extra carefully about bigotry, etc., by throwing in some more qualifiers.]
“I’m now associated with predatory polyamorous rationalists.” doesn’t explicitly assert that all poly people are predatory, but it does read to me as similar to “I’m now associated with predatory gay rationalists.” The implication is that it’s gross to be associated with poly people, just as it’s gross to be associated with predators. (“This is not what I signed up for.”) And the implication is that polyamory and predatory behavior are a sort of package deal.
Compare, for example, “I’m now associated with greedy Jewish EAs” or “I’m now associated with smelly autistic gamers”. Are these explicitly asserting that all Jews are greedy, or that all autistic people are smelly? No, but I get the message loud and clear. OP is not being subtle here regarding what they think of polyamorous people.
Yeah I guess that’s probably the normal assumption, and likely what was meant. To me I’d think of the sentence for a gay person as more like “I’m now associated with Jeffrey Dahmer” or “I’m now associated with groomers”. Like that could totally be said in a gay space, and the sentence doesn’t require qualifiers of “not all gay people”, by virtue of being said in a gay-friendly space.
But yeah I guess this case doesn’t hold if someone who isn’t poly says it. And the majority of EAs are monogamous, likely OP too.
This thread is helping me realize that I’m still assuming that EAs aren’t judging poly people here, and that the EA Forum is still a safe space for poly people. I’ll keep this potential blindspot in mind but keep giving the benefit of the doubt for now. It’s not productive for me right now to feel alienated regarding poly.
❤️
To be clear, I’m not making a claim about what EA-at-large feels. EA could be overall welcoming to poly people, and there could still be occasional “poly people are gross” posts on the EA Forum (that end up at +0 ish karma rather than +100 ish karma). I just do think that “poly people are gross” is one part of what this post was expressing.
For anybody who wants more on this precise subject, check out Cat Couplings Revisited.
(Full disclosure: am polyamorous rationalist ¯\_(ツ)_/¯)
I have been trying to find this post and the older one for a few weeks now, but I couldn’t remember the term — thanks so much for linking it.
Thanks, I’m glad I have language for this now.
I edited the post because I don’t want this to distract from the larger message. A few points:
1) The recent TIME article argues that a lot of the misconduct and harassment is related to polyamory in EA. A few quotes:
I’m not saying people can never consent to having multiple partners, but this is not okay. People should not feel pressured into lifestyle choices like these. There needs to be a place in EA for people who want to buy malaria nets and want nothing to do with Berkeley polycules.
2) Keep in mind that these analogies risk trivializing the oppression that the LGBTQ+ community has faced. Gay and queer individuals have faced and continue to face massive discrimination, and being gay is never a choice.
As a gay person I really strongly object to this. I think it’s quite clear that in most of the modern US, being poly is significantly weirder and puts you more at risk of discrimination (e.g. of issues at work or with your family, or of having your partners recognised by the law) than being gay.
This is classic “oppression olympics” of a style that I think is nearly always counterproductive.
(NB: I actually agree that Bay Area poly culture is probably a contributing factor to a lot of the recent allegations and broader cultural issues, and that people in that culture need to take that possibility really seriously and think carefully about possibilities for change. I don’t think that legitimizes general anti-poly discrimination or derogatory language.)
Why would you take the TIME article at face value on this?
It doesn’t even get the language right. I’m poly, and I have never once heard people talk about “joining a polycule” as the thing someone chooses to do. That’s not how it works. You choose to date someone. “Polycule” just describes the set of people who you are dating, who your partner(s) are dating, who their partner(s) are dating, and so on. Dating someone doesn’t imply anything about how you have to relate to your metamours, much less people farther out in the polycule. Sometimes you may never even know the full extent of your polycule.
I don’t know of a single poly person who would approve of the dynamic that the TIME article seems to describe, or any reason to think it is an accurate description of how EA works. Of course you shouldn’t shame people into dating you. Of course you shouldn’t leverage professional power for sexual benefit. Of course it’s good to be an EA and buy bed nets whether you are poly or monogamous. Nobody that I know of, poly or monogamous, disagrees with this. The fact that you think poly people do is what shows your prejudice. I suggest you try getting to know a poly person, and talk to a poly person about their relationship(s), before opening your mouth on the subject again.
If I understand properly, EA is to some extent a platform where many different perspectives can coexist. I find shrimp welfare or deep longtermism totally pointless, but cause choice is a main principle of the movement.
Your effort and money can be directed to those issues that you really care about. Democracy is good, choice is even better!
I appreciate that you took the time to explain your perspective, and I am sorry to hear that you feel as you do. I think it is understandable and I sympathise.
Hopefully things improve and I think that they will.
Some very quick thoughts:
Even if you don’t feel part of the community, you should perhaps still consider keeping an EA identity on the level of values.
For instance, you could continue to believe that you should i) try to do good and ii) try to do it effectively—arguably the core values that underpin EA as a philosophy.
I think that these are rare and admirable values and that EA is just one (though maybe the best) label of many that people use to communicate that they have them.
I don’t identify very strongly with the EA community, but I identify strongly with the core values as I see them.
Very-small-probability of very-large-impact is a straw man. People who think AGI risk is an important cause area think that because they also think that the probability is large.
I don’t see how that matters exactly? OP is talking about their effect, and I don’t think any work on AI safety to date has lowered the chance of catastrophe by more than a tiny amount.
I think my problem with a lot of this genre of post (neartermism > longtermism) is that the weird EAs donate much more? I would hazard a guess that “Berkeley polycules” have moved much more money for malaria nets, while working on AI Safety and/or earning to give in high-income jobs, than most of the “normie EAs” at their office jobs. Also, you can just donate and be a neartermist. Lots of people do this; you don’t need to brag about it or care about optics.
Wow.
Don’t have time to write a full article, but there is a lot, a lot to unpack.
First, it’s worth stopping to unpack the object-level claims in the parent comment; the reasoning is pretty wild, even on the surface level:
So absolutely, yes, someone working on AI safety in an EA-funded AI safety job, or for OpenAI/Anthropic/DeepMind, can donate more than someone working in a mere neartermist job in one of the neartermist orgs... umm, I think there is more than one issue in that last sentence if you stop to think about it.
I would probably challenge the main claim, that that community provides a dominant fraction of the earned income funding bednets.
Also, money shouldn’t be a criterion for discussing or waiving serious issues. No matter how much money a community donates, that does not mean it can be subject to bigotry or prejudice, or be shielded from investigation of, or concern about, serious crime or misconduct.
Like, so much to unpack. Some of the points above, especially the third point, really make a person wonder what people really believe about SBF and FTX, versus what they say.
Getting to the big stuff:
Even before November (before the FTX fraud, the Bostrom/Tegmark “Peter Pan” management, the concentrated Bay Area sexual abuse, and the “castles”), I don’t think that even the longtermists of, say, 2012, thinking about EA, would have found the late 2022 state of longtermism ideal.
The effect of longtermism (or rather the current instantiation/culture of “longtermists”) on the rest of “neartermist” EA is not publicly known even in EA circles.
The truth is, uh, very different from what the parent comment suggests.
The main complaint from neartermists is not something like “bednet funding should be 2x or 10x.” It’s not even clear most global health people want more bednets.
They want something else, or even a host of other things, but they might not be fully aware of it. Crisply elaborating on this would be the most brutal and sad thing, and IMO dangerous, for the reasons below.
If they knew the truth, or were given a few anecdotes, the neartermists would probably riot.
I can’t give specific statements and I don’t have time to explain, but like, trust me, we do not want this poked at, and I strongly recommend not writing more about this or making similar comments like the parent one.
I think you are overstretching the claims I’m making here. Rather, I am saying that the heuristic and insinuations are that:
There is a perceived clear split between longtermist and neartermist action, when in fact most EAs act in both regards, and the Bay Area memeing is unnecessarily divisive and misunderstands people’s motivations.
At no point do I excuse, nor do I think most people excuse, the sexual abuse that is too common.
I think people need to show more good faith in wondering why people shifted from things like GHW to AI Safety/CB/Governance in their jobs, rather than presuming it stems from some source of greed (though I have no doubt some did it out of self-interest, and I cannot speak for them).
Morality is hard in the best of times, and now is not the best of times. The movement may or may not be a good fit for you. I’m glad you’re still invested in doing good regardless of perceived or actual wrongdoing of other members of the movement to date, and I hope I and others will do the same.