Hi Ben. I came across this article sort of at random and wanted to weigh in.
I’m senior management at a for-profit (non-EA-affiliated) company. In principle, the idea of EA is very appealing to me. I absolutely agree that doing good “correctly” is really important. Prior to the last couple of years, I could absolutely have seen myself joining an EA org.
But over those years, as my exposure within Silicon Valley and the kinds of groups that overlap heavily with EA (e.g. “rationalists”) has grown, I’ve become more reluctant to support it. Bluntly, I don’t trust groups with very high proportions (perhaps majorities?) of people who believe in some form of ‘scientific racism’ to solve the problems of the world’s most vulnerable, who are almost entirely of races they consider biologically incapable of governing themselves. Nor do I trust groups so eager to deny what are (to me) self-evident problems with their local culture to solve problems within other cultures (especially ones they consider inferior).
“Effective” altruism requires a notion of what “effect” is. And when I find myself surrounded by people who seem so determined to ignore the stated and clear needs of the people around them, I am concerned that the “effect” they want is not the “effect” I want. How can I trust someone who won’t see sexism right in front of him (choice of pronoun very much intentional) to not exacerbate sexism through his efforts? How can I trust someone who thinks Africans are genetic cretins to have the cultural respect to help them build a functioning society within the structures that make sense to them? Paternalism and a refusal to understand conditions on the ground is the king of all altruistic failure-modes.
I get that there are a lot of stupid takes on the broader tech world (which I would consider EA to be a part of) and on the kind of people in it. All that “you can’t reduce feelings to numbers” nonsense is dumb. All the “white men are trying to help brown people and that’s racist” takes are dumb. I don’t want to throw the baby out with the bathwater. But the longer I’m around, the bigger I think these problems are, and the less welcome I feel, both for what I am and for what I believe. Management is a social discipline, and its practitioners do not particularly enjoy having the importance of social factors (which we know are vital even in very small organizations, much less in societies of millions) dismissed.
I don’t know that I have a solution to offer you here. For myself, I’m increasingly of the view that founder effects have unfortunately tainted the movement beyond repair. But maybe my voice can at least give you a sense of where some of your problems with finding people-oriented talent lie.
Sure. To take one concrete example, I know this is an explicit belief of Scott Alexander’s (author of SlateStarCodex/AstralCodexTen and major LessWrong contributor—these are the two largest specific sources of EA growth beyond generics like “blog” or 80k hours itself, per this breakdown, and were my own entry point into awareness of EA). This came out through a series of leaked emails which cite people like Steve Sailer (second link under “1. HBD is probably partially correct”) in a general defense of neoreactionaries. Yes, these emails are old, but (a) he’s made no effort to claim they’re incorrect and (b) he’s very recently defended people like Steve Hsu, who explicitly endorse HBD on the grounds that that is a valid theory that deserves space for advocacy. I also know Scott and his immediate associates personally, and their defenses of his views to me personally made no effort to pretend his views were otherwise.
When this fact came out, I was quite horrified and said as much. I assumed it would come as a major shock to others too. Instead, I was unable to find a single member of the Berkeley rationalist community who had a problem with it. I asked quite a few, and all of them (without exception) endorsed a position I can roughly sum up as “well sure, the fact is that black people are/probably are genetically stupid, but we’re not mean and just stating a fact, so it’s fine.” This included at least one person heavily involved in planning EA Global events here in the Bay Area, and every single person I know personally who has even the loosest affiliation with EA. To my knowledge, not one of these people has a problem with explicit endorsement of the belief that black people are genetically stupider than white people.
To be clear, I don’t think that makes them insincere. I believe they believe what they’re saying, and I believe that they are sincerely motivated to make the world better. That’s why I was part of that community in the first place—the people involved are indeed very kind and pleasant in the day to day, to the point that this ugliness could hide for a long time. So I don’t think stuff like “it seems like a bizarre thing for an EA to say” applies: I think they basically think that being effective requires facts and that ‘scientific racism’ is a fact or at least probable fact. There’s nothing inconsistent about that set of beliefs, abhorrent though it is to me.