This comment, which I cross-posted to LessWrong, has quickly accrued negative karma there. As I originally wrote it, the comment was easy to misunderstand, so I understand the confusion. I’ll explain here what I explained in an edit to my comment on LW, so as to avoid incurring the same confusion here on the EA Forum.
I wrote this comment off the cuff, so I didn’t put as much effort into writing it as clearly or succinctly as I could, or maybe should, have. So, I understand how it might read: as a long, meandering nitpick of a few statements near the beginning of the podcast episode, made without my having listened to the whole episode yet. On that reading, I then call a bunch of ex-EAs naive idiots, just as Elizabeth referred to herself as at least formerly being a naive idiot; say that even future effective altruists will be proven to be idiots; and suggest that those still propagating EA after so long, like Scott Alexander, might be the most naive and idiotic of all. To be clear, I also included myself, so this reading would also imply that I’m calling myself a naive idiot.
That’s not what I meant to say. I would downvote that comment too. What I’m saying is:
If what Elizabeth is saying about having been a naive idiot is true, then it would seem to follow that a lot of current and former effective altruists, including many rationalists, were also naive idiots for similar reasons.
If that were the case, then it would be consistent with greater truth-seeking, and with criticizing others for not putting enough effort into truth-seeking with integrity regarding EA, to point out to those hundreds of other people that they either once were, or maybe still are, naive idiots.
If Elizabeth, or whoever else, wouldn’t do that, not only because they’d consider it mean but moreover because they wouldn’t think it true, then they should apply the same standard to themselves and reconsider whether they were ever, in fact, just naive idiots.
I’m disputing the “naive idiocy” hypothesis here as spurious, as it comes down to the question of whether someone like Tim (and, by extension, someone like me in the same position, who has also mulled over quitting EA) is still being a naive idiot on account of not yet having updated to the conclusion Elizabeth has already reached. That’s important because it seems to be one of the major cruxes of whether someone like Tim, or me, would update and choose to quit EA entirely, which is the point of this dialogue. If that’s not a true crux of disagreement here, then speculating about whether hundreds of current and former effective altruists have been naive idiots is a waste of time.
An anonymous individual in a private message group, among several others (some effective altruists, some not), requested that this be submitted to the EA Forum, not wanting to submit the post themself. While that person could technically have submitted the post under an anonymous EA Forum user account, as a matter of personal policy they have other reasons they wouldn’t want to submit it regardless. As I was privy to that conversation, I volunteered to submit the post myself.
Other than submitting the link post to Dr. Thorstad’s post, my only other contribution was the summary provided above. I didn’t check with David beforehand to verify that summary as accurate, though I know he’s aware these link posts are up, and he hasn’t disputed the summary’s accuracy since.
I also didn’t mean to tag Scott Alexander in the link post above as a call-out. Having talked to the author, David, beforehand, I was informed that Scott was already aware that this post had been written and published. Scott wouldn’t have been aware beforehand, though, that I was submitting this as a link post after it had been published on Dr. Thorstad’s blog, Reflective Altruism. I tagged Scott so he could receive a notification and be aware of this post, largely about him, whenever he might next log on to the EA Forum (and likewise LessWrong, where this link post was also cross-posted). As to why this post was downvoted, other than the obvious reasons, I suspect, based on the link post itself or the summary I provided, that the downvoters included:
Those who’d otherwise be inclined to agree with David’s criticisms as presented, but who consider them not harsh enough, or who’d rather they not be discussed on the EA Forum at all so as not to bring further attention to the perceived association between EA and the subject matter in question, since they’d prefer there be even less of an association between the two.
Those who’d want to avoid a post like this being present on the EA Forum, so as not to risk further association between EA and the subject matter in question, based not on earnest disagreement but only on optics/PR concerns.
Those who disagree with the characterization of the subject matter as “so-called” race science, given that they may consider it as genuine a branch of science as any of the life sciences or social sciences.
Those who disagree with the characterization of the individuals referenced as “prominent thinkers” associated with the EA and/or rationality communities, either because they disagree that those thinkers are significantly ‘prominent’ at all, or because they consider the association between those thinkers and the EA or rationality communities to have been manufactured and exaggerated as part of past smear campaigns, and thus something that shouldn’t be validated whatsoever.
I’d consider all of those to be worse reasons to downvote this post, as they’re based on reactive conclusions about either optics or semantics. Especially as to optics, countering one Streisand effect with massive downvoting can be an over-correction that causes another.

I’m only making this clarifying comment today, when I didn’t bother to do so before, because I was reminded of the post by a notification that it has received multiple downvotes since yesterday. That may be because others were reminded of it as well: David made another, largely unrelated post on the EA Forum a few days ago, and this link post was the most recent one referring to any of his criticisms of EA. Either way, with over 20 comments in the last several weeks, downvoting this post didn’t obscure or bury it. While I doubt that was a significant motivation for most EA Forum members who downvoted it, it seems to me that anyone who downvoted mainly to ensure the post didn’t receive attention was in error. If anyone has evidence to the contrary, please present it; I’d be happy to learn I may be wrong about that. What I’d consider better reasons to downvote this post include:
The criticism in question may not do enough to acknowledge that the vast majority of Scott’s readership in the EA and rationality communities seems likely opposed to the viewpoints criticized, regardless of the extent to which Scott holds them himself, in contrast to the vocally persistent but much smaller minority of his readership who seem to hold the criticized views most strongly. That’s the gist of David Mathers’ comment here, the most upvoted one on this post. The points raised there are ones I’d expect David Thorstad to acknowledge or address before he continues writing this series, or at least if he hopes for future posts like this to be well received on the EA Forum. Doing so could serve as a show of good faith to the EA community, recognizing a need to clarify that it is not as much of a monolith as his criticisms might lead some to conclude.
The concern that it was unethical for Dr. Thorstad to bring more attention to how Scott was previously doxxed, or to his privately leaked emails. I was informed by Dr. Thorstad that Scott was aware of details like that before the criticism was published, so he might’ve objected privately if he were utterly opposed to those past controversies being publicly revisited. That wouldn’t have been known, though, to the many EA Forum or LessWrong users who saw or read the criticism for the first time through either of my link posts. (I took Dr. Thorstad at his word about how he’d interacted with Scott before the criticism was published, though I can’t myself corroborate further at this time for those who’d want more evidence or proof of that fact. Only Dr. Thorstad and/or Scott may be able to do so.)
While I don’t consider the inclusion in the criticism of some pieces of evidence for problems with some of Scott’s previously expressed views to be without merit, the criticism exaggerates how representative they are of Scott’s true convictions. That includes a Tumblr post of Scott’s from several years ago that was taken out of context and clearly made mostly in jest, though Dr. Thorstad writes about it as though all of that might be entirely lost on him. I don’t know whether he was being obtuse or simply wasn’t diligent in checking the context, but either way it’s an oversight that scarcely strengthens the case he made.
The astute point made in this comment: that this post, regardless of how agreeable or not one may find its contents, is poorly presented because it doesn’t focus on the most critical cruxes of disagreement.
I sympathize with this comment, as it reflects one of the points of contention I have with Dr. Thorstad’s article. While I of course sympathize with what the criticism is hinting at, I’d consider it better if that point had been prioritized as the main focus of the article, not left as subtext or a tangent.
Dr. Thorstad’s post multiple times describes the views expressed in the post as ‘unsavoury’, as though they’re like an overcooked pizza. Bad optics for EA, i.e., its being politically inconvenienced via association with pseudoscience or even bigotry, are a significant concern, and one often underrated in EA. Yet PR concerns might as well be insignificant to me compared to the possibility that excessive credulity among some effective altruists towards popular pseudo-intellectuals is leading them to embrace dehumanizing beliefs about whole classes of people based on junk science. The latter belies what could be a dire blind spot among a non-trivial portion of effective altruists, one that glaringly contradicts the principles of an effectiveness-based mindset or of altruism. If that isn’t as much of a concern for criticisms like these as what some other, often poorly informed, leftists on the internet believe about EA, then the worth of these criticisms will be much lower than it could or should be.
I’ve been mulling over submitting a response of my own to Dr. Thorstad’s criticism of ACX, clarifying where I agree or disagree with its contents, or with how they were presented. I appreciate and respect what Dr. Thorstad has generally been trying to do with his criticisms of EA (though I consider some of his other series more important than the one in question, about human biodiversity), but I also believe that, at least in this case, he could’ve done better. Given that I could summarize my constructive criticisms to Dr. Thorstad as a follow-up to my previous correspondence with him, I may do that so as not to take up more of his time, given how very busy he seems to be. I wouldn’t want to disrupt or delay too much the overall thrust of his effort, including his focus on other series that addressing concerns about these controversies might derail or distract him from. Much of what I’d want to say in a post of my own I’ve now presented in this comment. If anyone else would be interested in reading a fuller response from me to the post I linked last month, please let me know, as that would help inform my decision of how much more effort to invest in this dialogue.