Hi Ben. I came across this article sort of at random and wanted to weigh in.
I’m senior management at a for-profit (non-EA-affiliated) company. In principle, the idea of EA is very appealing to me. I absolutely agree that doing good “correctly” is really important. Prior to the last couple of years, I could absolutely have seen myself joining an EA org.
But over those years, as my exposure to Silicon Valley and the kinds of groups that overlap heavily with EA (e.g. “rationalists”) has grown, I’ve become more reluctant to support it. Bluntly, I don’t trust groups with very high proportions (perhaps majorities?) of people who believe in some form of ‘scientific racism’ to solve the problems of the world’s most vulnerable, who are almost entirely of races they consider biologically incapable of governing themselves. Nor do I trust groups that are so eager to deny what are (to me) self-evident problems with the local culture to solve problems within other cultures (especially ones they consider inferior).
“Effective” altruism requires a notion of what “effect” is. And when I find myself surrounded by people who seem so determined to ignore the stated and clear needs of the people around them, I am concerned that the “effect” they want is not the “effect” I want. How can I trust someone who won’t see sexism right in front of him (choice of pronoun very much intentional) not to exacerbate sexism through his efforts? How can I trust someone who thinks Africans are genetic cretins to have the cultural respect to help them build a functioning society within the structures that make sense to them? Paternalism, combined with a refusal to understand conditions on the ground, is the king of all altruistic failure modes.
I get that there are a lot of stupid takes about the broader tech world (which I would consider EA to be a part of) and about the kind of people in it. All that “you can’t reduce feelings to numbers” nonsense is dumb. All the “white men are trying to help brown people and that’s racist” takes are dumb. I don’t want to throw the baby totally out with the bathwater. But the longer I’m around, the bigger I think these problems are, and the less welcome I feel, both for what I am and for what I believe. Management is a social discipline, and its practitioners do not particularly enjoy having the importance of social factors (which we know are vital even in very small organizations, much less in societies of millions) dismissed.
I don’t know that I have a solution to offer you here. For myself, I’m increasingly of the view that founder effects have unfortunately tainted the movement beyond repair. But maybe my voice can at least give you a sense of where some of your problems with finding people-oriented talent lie.
Appreciate you sharing why you have a negative impression of the Effective Altruism movement and aren’t interested in joining an EA org; you might be getting downvoted under the “clear, on-topic, and kind” comment guideline, but I’m not sure. In my own experience, there sure are lots of frustrating Silicon Valley memes that are overly dismissive of social factors (or of sexism and racism) out in the world, but they aren’t dominant among people actually doing direct EA-affiliated work. As a few recent examples that demonstrate a sensitivity to the importance of social factors, I enjoyed this 80,000 Hours Podcast with Leah Garcés on strategic and empathetic communication for animal advocacy and this post on surprising things learned from a year of working on policy to eliminate lead exposure in Malawi, Botswana, Madagascar and Zimbabwe.
I’m surprised to see this so heavily downvoted—I’ve also had concerns about EA culture with regard to sex and race, and I wouldn’t be surprised if it puts off people with some of the soft skills EA is missing. This comment definitely exaggerates, and I’m not happy about that, but the underlying idea, that people who are good at navigating social dynamics are wary of EA and that this is contributing to the talent gap, is pretty interesting.
Hi! Like Tessa, I appreciate you sharing your concerns about the EA movement. I downvoted because some of your criticisms seem off the mark to me. Specifically, in the two years I’ve been highly involved in EA, I haven’t heard a single person say that non-white people are “biologically incapable of governing themselves.” The scientific consensus is that “claims of inherent differences in intelligence between races have been broadly rejected by scientists on both theoretical and empirical grounds” (Wikipedia), so it seems like a bizarre thing for an EA to say. Do you mind telling us where you’ve heard someone in the EA community say this?
Sure. To take one concrete example, I know this is an explicit belief of Scott Alexander’s (author of SlateStarCodex/AstralCodexTen and major LessWrong contributor—these are the two largest specific sources of EA growth beyond generics like “blog” or 80k hours itself, per this breakdown, and were my own entry point into awareness of EA). This came out through a series of leaked emails which cite people like Steve Sailer (second link under “1. HBD is probably partially correct”) in a general defense of neoreactionaries. Yes, these emails are old, but (a) he’s made no effort to claim they’re incorrect and (b) he’s very recently defended people like Steve Hsu, who explicitly endorse HBD on the grounds that it is a valid theory that deserves space for advocacy. I also know Scott and his immediate associates personally, and their defenses of his views to me personally made no effort to pretend his views were otherwise.
When this fact came out, I was quite horrified and said as much. I assumed this would be a major shock. Instead, I was unable to find a single member of the Berkeley rationalist community who had a problem with it. I asked quite a few, and all of them (without exception) endorsed a position that I can roughly sum up as “well sure, the fact is that black people are/probably are genetically stupid, but we’re not mean, we’re just stating a fact, so it’s fine”. This included at least one person involved heavily with planning EA Global events here in the Bay Area, and included every single person I know personally who has even the loosest affiliation with EA. To my knowledge, not one of these people has a problem with explicit endorsement of the belief that black people are genetically stupider than white people.
To be clear, I don’t think that makes them insincere. I believe they believe what they’re saying, and I believe that they are sincerely motivated to make the world better. That’s why I was part of that community in the first place—the people involved are indeed very kind and pleasant in the day to day, to the point that this ugliness could hide for a long time. So I don’t think stuff like “it seems like a bizarre thing for an EA to say” applies: I think they basically think that being effective requires facts and that ‘scientific racism’ is a fact or at least probable fact. There’s nothing inconsistent about that set of beliefs, abhorrent though it is to me.
Hey, I thought this discussion could use some data. I also added some personal impressions.
These are the results of the 2020 SSC survey.
For the question “How would you describe your opinion of the [sic] the idea of “human biodiversity”, eg the belief that races differ genetically in socially relevant ways?”
20.8% answered 4 and 8.7% answered 5, where 1 is “very unfavorable” and 5 is “very favorable.”
The answers look similar for 2019.
Taking that at face value, roughly 30% of Scott’s readers think favorably of “HBD”.
(I guess you could look at it as “80% of SSC readers fail to condemn scientific racism”. But that doesn’t strike me as charitable.)
From the same survey, 13.6% identified as EAs, and 33.4% answered sorta EA.
I should mention that the survey has some nonsensical answers (IQs of 186, verbal SATs of 30). It also appears that many answers lean liberal (identifying as liberal, thinking favorably of feminism and more open borders, while thinking unfavorably of Trump).
A while ago, Gwern wrote:

“… If HBD is true, then all the existing correlational and longitudinal evidence immediately implies that group differences are the major reason why per capita income in the USA are 3-190x per capita income in Africa, that group differences are a major driver of history and the future, that intelligence has enormous spillovers totally ignored in all current analyses. This has huge implications for historical research, immigration policy (regression to the mean), dysgenics discussions (minor to irrelevant from just the individual differences perspective but long-term existential threat from HBD), development aid, welfare programs, education, and pretty much every single topic in the culture wars touching on ‘sexism’ or ‘racism’ where the supposedly iron-clad evidence is confounded or based on rational priors.”
I’m trying to imagine what global development charities EAs who believe HBD donate to, and I’m having a hard time.
Assuming this implies that some EAs (1-5%?) believe in this, I would reckon they’re more focused on X-risks or animal welfare. (I no longer think this is true; see the comment below.) It would be helpful to see how the people who identify as EAs answered this question.
Finally, regarding Scott’s email (the sharing of which I think was a horrible violation of privacy): the last sentence is emblematic of the attitude of lots of people in the community (including myself). My Goodreads contains lots of books I expect to disagree with or be offended by (Gyn/Ecology, The Bell Curve), but I still think it’s important to look into them.
Valuing new insights sometimes means looking into things no one else would, and that has been very useful for the community (fish/insect welfare, longtermism). But unfortunately, one risk is that at least some people will come out believing (outrageously) wrong things. I think that is worth it.
On a personal note, I’m black, and a community organizer, and I haven’t encountered anything but respect and love from the EA community.
Great comment!
I don’t totally follow why “the belief that races differ genetically in socially relevant ways” would lead one not to donate to, for example, the Against Malaria Foundation or GiveDirectly. Even assuming, say, a (slightly?) lower average IQ, it seems to me that less malaria or more money would still do most of what one would hope for, and what the RCTs say they do, even if you might expect (slightly?) lower economic growth potential and, in the longer term, (slightly?) less potential for those regions to become hubs of highly specialized skilled labor.
I think you’re right. I guess I took Gwern’s comment at face value and tried to figure out how development aid would look different due to the “huge implications”, which was hard.