Today, The Guardian published an article titled "'Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute". I thought I should flag this article here, since it's such a major news organization presenting a rather scathing picture of EA and longtermism.
Personally, I see much of this article as unfair, but I imagine it will be successful in steering some readers away from engaging with the ideas of EA and longtermism.
I have a lot of thoughts about this article, but I don't want to turn this into an opinion piece. I'll just say that I like this quote from the recent conversation between Sam Harris and Will MacAskill: "ideas about existential risk and actually becoming rational around the real effects of efforts to do good, rather than the imagined effects or the hoped-for effects… all of that still stands. I mean, none of that was wrong, and none of that is shown to be wrong, by the example of Sam Bankman-Fried, and so I do mourn any loss that those ideas have suffered in public perception because of this." -Sam Harris, ~1:01:52, episode #361 of the Making Sense podcast.
Seems like a rather vague collection of barely connected anecdotes haphazardly strung together.
I am not particularly concerned as I don’t see this persuading anybody.
Gonna roll the dice and not click the link, but will guess that Torres and/or Gebru gets cited extensively! https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty (such a shame this excellent piece doesn't get more circulation)
I found this very concerning. I posted it too, but then a helpful admin showed me where it had already been posted. I need to be better at searching :D
When we consider the impact of this, we need to forget for a moment everything we know about EA and imagine the impact this will have on someone who has never heard of EA, or who has just a vague idea about it.
I do not agree at all with the content of the article, and especially not with its tone, which frankly surprised me coming from the Guardian. But even this shows how marginal EA is, even in the UK: one columnist can write a pretty ill-informed, under-researched article, and apparently nobody challenged it.
BUT: I also see an opportunity. If someone credible from the UK EA community were to write an even-handed, balanced rebuttal of this piece, that might turn this into a positive. A rebuttal could focus on the way that people like Toby Ord choose to live frugally and donate most of their salary to good causes, which is far more reflective of EA than the constant references to SBF (who, of course, is one of the very few EAs mentioned in the article).
I’m not sure the editors at the Guardian realise how closely EA’s philosophy aligns with many of the values they promote, and maybe this is a chance to change that and get some positive publicity.
I think the association of EA with eugenics and far-right views about race is potentially a bigger reputational hazard than what happened with FTX. With FTX, there is no evidence (that I'm aware of) that anyone in EA knew about the fraud before it became publicly known. The racism in EA, by contrast, is happening out in the open, and the community at large is complacent and, therefore, complicit.
Example 1: https://forum.effectivealtruism.org/posts/kgBBzwdtGd4PHmRfs/an-instance-of-white-supremacist-and-nazi-ideology-creeping
Example 2: https://forum.effectivealtruism.org/posts/mZwJkhGWyZrvc2Qez/david-mathers-s-quick-takes?commentId=AnGzk7gjzpbMsHXHi
Example 1 references a post that's sitting at a score of −6. It was not a well-received post.
Example 2 is a very popular post denouncing Richard Hanania.
I would not interpret that as the community being complacent.
One of the defenses offered for the apparent number and weight of upvotes on the Ives Parr posts (cf. Example 1) was that voters may reach their voting decisions by comparing the amount of karma a post/comment has with the amount it should have, rather than by making an independent decision. In other words, maybe some upvoters thought the post/comment should have zero or somewhat negative karma, but not karma that negative.
I’m updating against that theory based on the voting on this comment, which is sitting at −43 as I write this. This is not a norm-breaking comment, and it’s extremely uncommon for a comment to get to this level without being norm-breaking. While one may disagree with the perspective offered (and I do find portions of it to be overstated), evidentiary support has been offered. It is far more negative in karma than the Ives Parr posts; this says something concerning about what content the user base believes is deserving of a heavy karma penalty.
That doesn’t match my impression. IMO internet downvotes are generally rather capricious and the Forum is no exception. For example, this polite comment recommending a neuroscience book got downvoted to −60, apparently leading the author to delete their account.
In any case, Concerned User is concerned about a reputational risk. From the perspective of reputational risk, repeatedly harping on, e.g., a downvoted post from many months ago that makes us look bad offers a very unclear gain. I didn't downvote Concerned User's comment and I think they meant well by writing it, but it does strike me as an attempt to charge into quicksand, and I tend to interpret the downvotes as a strong feeling that we shouldn't go there.
I’ve been reading discussions like this one on the EA Forum for years, and they always seem to go the same way. Side A wants to be very sure we’re totally free of $harmful_ideology; Side B wants EA to be a place that’s focused on factual accuracy and free of intellectual repression. The discussion generally ends up unsatisfactory to both sides. Side A interprets Side B’s arguments as further evidence of $harmful_ideology. And Side B just sees more evidence of a chilling intellectual climate. So I respect users who have decided to just downvote and move on. I don’t know if there is any solution to this problem—my best idea is to simultaneously condemn Nazis and affirm a commitment to truth and free thought, but I expect this would end up going wrong somehow.
The base rate of good-faith, norm-compliant comments being massively downvoted remains extremely low. I think that is pretty relevant in choosing how much to update on the karma votes here and in the Parr votes.
Substantively, the problem is that the evidence suggests the voting userbase is at least as opposed to Concerned User reminding us of Parr's posts as it is to Parr making the posts in the first place. While an optics-focused user might not be happy that Concerned User is bringing this up, one would expect their downvotes on the posts that created the optics problem in the first place to be at least as strong. If they aren't downvoting the Parr posts due to "free speech" concerns, they shouldn't be downvoting Concerned User for exercising their free-speech rights to call out what they see as a pattern of racism in EA.
One hypothesis: Forum users differ on whether they prioritize optics vs intellectual freedom.
Optics voters downvote both Parr and Concerned User. They want it all to go away.
Intellectual freedom voters upvote Parr, but downvote Concerned User. They appreciate Parr exploring a new cause proposal, and they feel the censure from Concerned User is unwarranted.
Result: Parr gets a mix of upvotes and downvotes. Concerned User is downvoted by everyone, since they annoyed both camps, for different reasons.
This is plausible, although I’d submit that it requires enough “optics voters” to be pretty bad at optics. Specifically, they would need to be unaware of the negative optical consequences of the comment here having been at −43.
Moreover, there are presumably voters who downvoted Parr and upvoted Concerned User because they thought Parr's posts were deeply problematic and that Concerned User was right to call them out. For this hypothesis to work, they must have been substantially outnumbered by the group you describe as "intellectual freedom voters." (I say the "group you describe" because the described voting behavior is the same as one would expect from people who sympathize with Parr's views on the merits; I see no clear way to exclude the sympathy rationale on voting behavior alone.)
I don't necessarily agree that the community is either complacent or complicit, but I do agree that this is potentially a massive reputational hazard. It's not about anyone proving that EAs are racist; it's just about people starting to subconsciously associate "racism" and "EA", even a tiny bit. That could really hurt the movement.
Again, as per my comment above, I think there is great value in a firm rebuttal from a credible voice in the UK EA community.
It’s just absurd that one email from nearly 30 years ago, taken out of context, is being used to tar an entire global community.
We also need to remember that back in 1996, when the email was written, the world was not in its current state, where any phrase, even one uttered provocatively or in jest, can be taken literally and assumed to represent a person's true beliefs, even when there are 10,000 examples of them saying the exact opposite. I remember when I was in college it was quite normal to write or say shocking things just to get a reaction or a laugh; we didn't yet have the mentality that you shouldn't write or say anything you wouldn't be happy to see on the front page of the Times.
I think the commenter's point is about the presence of current racism, and two recent discussions on the Forum are offered as evidence. So while this statement may work as a response to criticism based predominantly on the Bostrom e-mail, I don't find it particularly responsive to criticism based on current racism.