Seeing the discussion play out here lately, and in parallel seeing the topic either not be brought up or be totally censored on LessWrong, has made the following more clear to me:
A huge fraction of the EA community’s reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.
Generalizing a lot, it seems that “normie EAs” (IMO correctly) see glaring problems with Bostrom’s statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and “rationalist-EAs” want to debate race and IQ (or think that the issue is so minor/”wokeness-run-amok-y” that it should be ignored or censored). This predictably leads to conflict.
(I am sure many will take issue with this, but I suspect it will ring true/help clarify things for some, and if this isn’t the time/place to discuss it, I don’t know when/where that would be)
[Edit: I elaborated on various aspects of my views in the comments, though one could potentially agree with this comment/not all the below etc.]
There’s definitely no censorship of the topic on LessWrong. Obviously I don’t know for sure why discussion is sparse, but my guess is that people mostly (and, in my opinion, correctly) don’t think it’s a particularly interesting or fruitful topic to discuss on LessWrong, or that the degree to which it’s an interesting subject is significantly outweighed by mindkilling effects.
Edit: with respect to the rest of the comment, I disagree that rationalists are especially interested in object-level discussion of the subjects, though they probably are much more likely to disapprove of the idea that discussion of the subject should be verboten.
I think the framing where Bostrom’s apology is a subject which has to be deliberately ignored is mistaken. Your prior for whether something sees active discussion on LessWrong is that it doesn’t, because most things don’t, unless there’s a specific reason you’d expect it to be of interest to the users there. I admit I haven’t seen a compelling argument for there being a teachable moment here, except the obvious “don’t do something like that in the first place”, and perhaps “have a few people read over your apology with a critical eye before posting it” (assuming that didn’t in fact happen). I’m sure you could find a way to tie those in to the practice of rationality, but it’s a bit of a stretch.
Thanks for clarifying on the censorship point!
I do think it’s pretty surprising and in-need-of-an-explanation that it isn’t being discussed (much?) on LW—LW and EA Forum are often pretty correlated in terms of covering big “[EA/rationalist/longtermist] community news” like developments in AI, controversies related to famous people in one or more of those groups, etc. And it’s hard to think of more than 1-2 people who are bigger deals in those communities than Bostrom (at most, arguably it’s zero). So him being “cancelled” (something that’s being covered in mainstream media) seems like a pretty obvious thing to discuss.
To be clear, I am not suggesting any malicious intent (e.g. “burying” something for reputational purposes), and I probably shouldn’t have used the word censorship. If that’s not what’s going on, then yes, it’s probably just that most LWers think it’s no big deal. But that does line up with my view that there is a huge rationalist-EA vs. normie-EA divide, which I think people could agree with even if they lean more towards the other side of the divide than me.
LessWrong in-general is much less centered around personalities and individuals, and more centered around ideas. Eliezer is a bit of an outlier here, but even then, I don’t think personality-drama around Eliezer could even rise to the level of prominence that personality-drama tends to have on the EA Forum.
I don’t find this explanation convincing fwiw. Eliezer is an incredible case of hero-worship—it’s become the norm to just link to jargon he created as though it’s enough to settle an argument. The closest thing we have here is Will, and most EAs seem to favour him for his character rather than necessarily agreeing with his views—let alone linking to his posts like they were scripture.
Other than the two of them, I wouldn’t say there’s much discussion of personalities and individuals on either forum.
Eliezer is an incredible case of hero-worship—it’s become the norm to just link to jargon he created as though it’s enough to settle an argument.
I think that you misunderstand why people link to things.
If someone didn’t get why I feel morally obligated to help people who live in distant countries, I would likely link them to Singer’s drowning child thought experiment. Either during my explanation of how I feel, or in lieu of one if I were busy.
This is not because I hero-worship Singer. This is not because I think his posts are scripture. This is because I broadly agree with the specific thing he said which I am linking, and he put it well, and he put it first, and there isn’t a lot of point of duplicating that effort. If after reading you disagree, that’s fine, I can be convinced. The argument can continue as long as it doesn’t continue for reasons that are soundly refuted in the thing I just linked.
I link people to things pretty frequently in casual conversation. A lot of the time, I link them to something posted to the EA Forum or LessWrong. A lot of the time, it’s something written by Eliezer Yudkowsky. This isn’t because I hero-worship him, or that I think linking to something he said settles an argument—it’s because I broadly agree with the specific thing I’m linking and don’t see the point of duplicating effort. If after reading you disagree, that’s fine, I can be convinced. The argument can continue as long as it doesn’t continue for reasons that are soundly refuted in the thing I just linked.
There are a ton of people who I’d like to link to as frequently as I do Eliezer. But Eliezer wrote in short easily-digested essays, on the internet instead of as chapters in a paper book or pdf. He’s easy to link to, so he gets linked.
There’s a world of difference between the link-phrases ‘here’s an argument about why you should do x’ and ‘do x’. Only Eliezer seems to regularly merit the latter.
Here are the last four things I remember seeing linked as supporting evidence in casual conversation on the EA forum, in no particular order:
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=HebnLpj2pqyctd72F—link to Scott Alexander, “We have to stop it with the pointless infighting or it’s all we will end up doing,” is ‘do x’-y if anything is. (It also sounds like a perfectly reasonable thing to say and a perfectly reasonable way to say it.)
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=SCfBodrdQYZBA6RBy—separate links to Scott Alexander and Eliezer Yudkowsky, neither of which seem very ‘do x’-y to me.
https://forum.effectivealtruism.org/posts/irhgjSgvocfrwnzRz/?commentId=NF9YQfrDGPcH6wYCb—link to Scott Alexander, seems somewhat though not extremely ‘do x’-y to me. Also seems like a perfectly reasonable thing to say and I stand by saying it.
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=x5zqnevWR8MQHqqvd—link to Duncan Sabien, “I care about the lives we can save if we don’t rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it’d make any sense whatsoever for the accusation de jour to be what it looks like,” seems pretty darn ‘do x’-y. I don’t necessarily stand behind how strongly I came on there, I was in a pretty foul mood.
I think that mostly, this is just how people talk.
I am not making the stronger claim that there are zero people who hero-worship Eliezer Yudkowsky.
Also, I would add at the very least Gwern (which might be relevant to note regarding the current topic) and Scott Alexander as two other clear cases of “personalities” on LW.
I agree that there are of course individual people that are trusted and that have a reputation within the community, but conversations around Scott Alexander’s personality, or his reputation, or his net effect on the world, are much rarer on LW than on the EA Forum, as far as I can tell.
Like, when was actually the last thread on LW about drama caused by a specific organization or individual? In my mind almost all of that tends to congregate on the EA Forum.
My guess is that you see this more in EA because the stakes are higher for EAs. There’s much more of a sense that people here are contributing to the establishment and continuation of a movement, the movement is often core to people’s identities (it’s why they do what they do, live where they live, etc), and ‘drama’ can have consequences on the progress of work people care a lot about. Few people are here just for the interesting ideas.
While LW does have a bit of a corresponding rationality movement I think it’s weaker or less central on all of these angles.
Yep, I agree, that’s a big part of it.
I think Jeff is right, but I would go so far as to say the hero worship on LW is so strong that there’s also a selection effect—if you don’t find Eliezer and co convincing, you won’t spend time on a forum that treats them with such reverence (this at least is part of why I’ve never spent much time there, despite being a cold calculating Vulcan type).
Re drama around organisations, there are way more orgs which one might consider EA than which one might consider rationalist, so there’s just more available lightning rods.
It’s a plausible explanation! I do think even for Eliezer, I really don’t remember much discussion of like, him and his personality in the recent years. Do you have any links? (I can maybe remember something from like 7 years ago, but nothing since LW 2.0).
Overall, I think there are a bunch of other also kind of bad dynamics going on on LW, but I do genuinely think that there isn’t that much hero worship, or institution/personality-oriented drama.
I’m saying the people who view him negatively just tend to self-select out of LW. Those who remain might not bother to have substantive discussion—it’s just that the average mention of him seems ridiculously deferential/overzealous in describing his achievements (for example, I recently went to an EAGx talk which described him, along with Tetlock, as one of the two ‘fathers of forecasting’).
If you want to see negative discussion of him, that seems to be basically what RationalWiki and r/Sneerclub exist for.
Putting Habryka’s claim another way: If Eliezer were involved right now in a huge scandal like, say, SBF or Will MacAskill was, then I think modern LW would mostly handle it pretty fine. Not perfectly, but I wouldn’t expect nearly the amount of drama that EA’s getting. (Early LW from the 2000s or early 2010s would probably do worse, IMO.) My suspicion is that LW would have way less personal drama over Eliezer than, say, EA has over SBF or Nick Bostrom.
I think there are a few things going on here, not sure how many we’d disagree on. I claim:
Eliezer has direct influence over far fewer community-relevant organisations than Will does or SBF did (cf comment above that there exist far fewer such orgs for the rationalist community). Therefore a much smaller proportion of his actions are relevant to the LW community than Will’s are and SBF’s were to the EA community.
I don’t think there’s been a huge scandal involving Will? Sure, there are questions we’d like to see him openly address about what he could have done differently re FTX—and I personally am concerned about his aforementioned influence because I don’t want anyone to have that much—but very few if any people here seem to believe he’s done anything in seriously bad faith.
I think the a priori chance of a scandal involving Eliezer on LW is much lower than the chance of a scandal on here involving Will because of the selection effect I mentioned—the people on LW are selected more strongly for being willing to overlook his faults. The people who both have an interest in rationality and get scandalised by Bostrom/Eliezer hang out on Sneerclub, pretty much being scandalised by them all the time.
The culture on here seems more heterogeneous than LW’s. Inasmuch as we’re more drama-prone, I would guess that’s the main reason why—there’s a broader range of viewpoints and events that will trigger a substantial proportion of the userbase.
So these theories support/explain why there might be more drama on here, but push back against the ‘no hero-worship/not personality-oriented’ claims, which both ring false to me. Overall, I also don’t think the lower drama on LW implies a healthier epistemic climate.
I don’t think there’s been a huge scandal involving Will? Sure, there are questions we’d like to see him openly address about what he could have done differently re FTX—and I personally am concerned about his aforementioned influence because I don’t want anyone to have that much—but very few if any people here seem to believe he’s done anything in seriously bad faith.
I was imagining a counterfactual world where William MacAskill did something hugely wrong.
And yeah come to think of it, selection may be quite a bit stronger than I think.
The bigger discussion from maybe 7 years ago that Habryka refers to was, as far as my memory goes, his April 1st post in 2014 about Dath Ilan. The resulting discussion was critical enough of EY that from that point on most of EY’s writing was published on Facebook/Twitter and not LessWrong anymore. On his Facebook feed he can simply ban people who he finds annoying, but on LessWrong he couldn’t.
Izzat true? Aside from edited versions of other posts and cross-posts by the LW admins, I see zero EY posts on LW between mid-September 2013 and Aug 2016, versus 21 real posts earlier in 2013, 29 in 2012, 12 in 2011, 17 in 2010, and ~180 in 2009.
So I see a big drop-off after the Sequences ended in 2009, and a complete halt in Sep 2013. Though I guess if he’d mostly stopped posting to LW anyway and then had a negative experience when he poked his head back in, that could cement a decision to post less to LW.
(This is the first time I’m hearing that the post got deleted; I thought I saw it on LW more recently than that?)
2017 is when LW 2.0 launched, so 2014-2016 was also a nadir in the site’s quality and general activity.
As a person who started reading LW several months ago, I think that Eliezer is a great thinker, but he does get things wrong quite a few times. He is not a perfect thinker or a hero, but he is quite a bit better than most (arguably far better than most).
I wouldn’t idolize him, but nor would I ignore Eliezer’s accomplishments.
This seems to fit with the fact that there wasn’t much appetite for the consequentialist argument against Bostrom until the term “information hazard” came up.
I for one probably wouldn’t have brought it up on LessWrong because it seems like a tempest in a teapot. What is there to say? Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely. Normally this would be standard CW stuff, except for the connection to EA. The most notable thing — scary thing — is how some people on this forum seem to be saying something like “Nick is a bad person, his apology is not acceptable, and it’s awful that not everyone is on board with my interpretation” (“agreed”, whispers the downvote brigade in a long series of −1s against dissenters.) If I bring this up as a metadiscussion on LW, would others understand this sentiment better than me?
I suspect that the neurotypicals most able to explain it to weirdos like me are more likely to be here than there. Since you said that
normie EAs” (IMO correctly) see glaring problems with Bostrom’s statement
I assume you mean the apology, and I would be grateful if you would explain what these glaring problems are. [edit: also, upon reflection maybe it’s not a neurodiverse vs neurotypical divide, but something else such as political thinking or general rules of thought or moral system. I never wanted to vote Republican, so I’m thinking it’s more like a Democrat vs Independent divide.]
I am curious, too, whether other people see the same problems or different ones. (a general phenomenon in life is that vague statements get a lot more upvotes than specific ones because people often agree with a conclusion while disagreeing on why that conclusion is true.)
Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely.
Registering strong disagreement with this characterisation. Nick has done vanishingly little to apologise, both now and in 1997. In the original emails and the latest apology, he has done less to distance himself from racism than to endorse it.
In the original emails and the latest apology, he has done less to distance himself from racism than to endorse it.
In what ways do you think the 2023 message endorses racism? Is there a particular quote or feature of it that stands out to you?
The apology contains an emphatic condemnation of the use of a racist slur:
I completely repudiate this disgusting email from 26 years ago. It does not accurately represent my views, then or now. The invocation of a racial slur was repulsive. I immediately apologized for writing it at the time, within 24 hours; and I apologize again unreservedly today. I recoil when I read it and reject it utterly.
The 1996 email was part of a discussion of offensive communication styles. It included a heavily contested and controversial claim about group intelligence, which I will not repeat here. [1] Claims like these have been made by racist groups in the past, and an interest in such claims correlates with racist views. But there is not a strict correlation here: expressing or studying such claims does not entail you have racist values or motivations.
In general I see genetic disparity as one of the biggest underlying causes of inequality and injustice. I’ve no informed views or particular interests in averages between groups of different skin colour. But I do feel terrible for people who find themselves born with a difficult hand in the genetic lottery (e.g. a tendency to severe depression or dementia). And so I endorse research on genetic causes of chronic disadvantage, with the hope that we can improve things.
One of the main complaints people (including me) have about Bostrom’s old_email.pdf is that he focuses on the use of a slur as the thing he is regretful for, and is operating under a very narrow definition of racism where a racist is someone who dislikes people of other races. But the main fault with the 1996 email, for which Bostrom should apologise, the most important harm and the main reason it is racist, was that it propagated the belief that blacks are inherently stupider than whites (it did not comment on the causation, but used language that is conventionally understood to refer to congenital traits, ‘blacks have lower IQ than mankind in general’). Under this view, old_email.pdf omits to apologise for the main thing people are upset about in the 1996 email, namely, the racist belief, and the lack of empathy for those reading it; and it clarifies further that, in Bostrom’s view, the lower IQ of blacks may in fact be in no small part genetically determined, and moreover, as David Thorstad writes, “Bostrom shows no desire to educate himself on the racist and discredited science driving his original beliefs or on the full extent of the harms done by these beliefs. He does not promise to read any books, have hard conversations, or even to behave better in the future. If Bostrom is not planning to change, then why are we to believe that his behavior will be any better than it was in the 1990s?”
So in my view: in total, in 1996 Nick endorses racist views, and in 2023 he clarifies beyond doubt that the IQ gap between blacks and whites may be genetically determined (and says sorry for using a bad word).
I am sorry for saying that black people are stupider than whites. I no longer hold that view.
Even if he, with evidence, still believes it to be true? David Thorstad can write all he wants about changing his views, but the evidence of the existence of a racial IQ gap has not changed. It is as ironclad and universally accepted by all researchers as it was in 1996 following the publication of the APA’s Intelligence: Knowns and Unknowns.
This may be a difference of opinion, but I don’t think that acknowledging observed differences in reality is a racist view. But I am interested to know if you would prefer he make the statement anyway.
By the way, the finding of an IQ gap isn’t (or shouldn’t be?) what is under contention/offensive, because that’s a real finding. It’s the idea that it has a significant genetic component.
I think both Bostrom and I claim that he does not believe that idea, but I’ll entertain your hypothetical below.
I think that, in the world where racial IQ gaps are known not to have a significant genetic component, believing so anyway as a layperson makes one very probably a racist (glossed as a person whose thinking is biased by motivated reasoning on the basis of race); and in the world where racial IQ gaps are known to have a significant genetic component, believing so is not strong evidence of being a racist (with the same gloss). There are also worlds in between.
In any of these worlds, and the world where we live, responsible non-experts should defer to the scientific consensus (as Bostrom seems to in 2023), and when they irresponsibly promote beliefs that are extremely harmful and false, through recklessness, they should apologise for that.
I don’t think anyone should apologise for the very act of believing something one still believes, because an apology is by nature a disagreement with one’s past self. But Bostrom in 2023 does not seem to believe any more, if he ever did, that the racial IQ gap is genetically caused, which frees him up to apologise for his 1996 promotion of the belief.
As a reminder, the original description I took issue with was:
Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely
It ‘sounds pretty racist’ to say “blacks have lower IQ than mankind in general” because that phrasing usually implies it’s congenital. In other words, in 1996, Bostrom (whose status as a racist is ambiguous to me, and I will continue to judge his character based on his actions in the coming weeks and months) said something that communicates a racist belief, and I want to give him the benefit of the doubt that it was an accident — a reckless one, but an accident. However, apart from apologising for the n-word slur, I haven’t seen much that can be interpreted as an apology for the harm caused by this accident.
Now, if Bostrom, as a non-expert, in fact is secretly confident that IQ and race correlate because of genetics, I think that his thinking is probably biased in a racist way (that is to say, he is a racist) and he should be suspicious of his own motives in holding that belief. If he then finds his view was mistaken, he may meaningfully apologise for any racist bias that influenced his thinking. Otherwise, an apology would not make any sense as he would not think he’d done anything wrong.
The lack of apology for accidentally (or deliberately) promulgating the racist view is wrong if Bostrom does not hold the view (any more). He is mistaken when in 2023 he skates over acknowledging the main harm he contributed to, by focusing mostly on his mention of the n-word (a lesser harm, partly due to the use-mention distinction).
I feel like some people are reading “I completely repudiate this disgusting email from 26 years ago” and thinking that he has not repudiated the entire email, just because he also says “The invocation of a racial slur was repulsive”. I wonder if you interpreted it that way.
One thing I think Bostrom should have specifically addressed was when he said “I like that sentence”. It’s not a likeable sentence! It’s an ambiguous sentence (one interpretation of which is obviously false) that carries a bad connotation (in the same way that “you did worse than Joe on the test” has a different connotation than “Joe did better than you on the test”, making the second sentence probably better). Worst of all, it sounds like the kind of thing racists say. The nicest thing I would say about this sentence is that it’s very cringe.
Now I’m a “high-decoupler Independent”, and “low-decoupler Democrats” clearly wanted Bostrom to say different things than me. However, I suspect Bostrom is a high-decoupler Independent himself, and on that basis he loses points in my mind for not addressing the sorts of things that I myself notice. On the other hand… apology-crafting is hard and I think he made a genuine attempt.
But there are several things I take issue with in Thorstad’s post, just one of which I will highlight here. He said that claims like “I think it is probable that black people have a lower average IQ than mankind in general” are “widely repudiated, are based on a long history of racist pseudoscience and must be rejected” (emphasis mine). In response to this I want to highlight a comment that discusses an anti-Bostrom post on this forum:
This post says both:
> If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs.
and
> If he’d said, for instance, “hey I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn’t make sense. It’s closing, but not fast enough. We should work harder on fixing this.” That would be more sensible. Same for the community itself disavowing the explicit racism.
The first quote says believing X (that there exists a racial IQ gap) is harmful and will result in nobody trusting you. The second says X is, in fact, true.
I think that we high-decouplers tend to feel that it is deeply wrong to treat a proposition X as true if it is expressed in one way, but false/offensive if expressed in another way. If it’s true, it’s true, and it’s okay to say so without getting the wording perfect.[1]
In the Flynn effect, which I don’t believe is controversial, populations vary significantly in IQ depending on when they were born. But if timing of birth is correlated with IQ, then couldn’t location of birth be correlated with IQ? Or poverty, or education? And is there not some correlation between poverty and skin color? And are not correlations usually transitive? I’m not trying to prove the case here, just trying to say that people can reasonably believe there is a correlation, and indeed, you can see that even the anti-Bostrom post above implies that a correlation exists.
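(One caveat on the transitivity step, since the argument leans on it: correlation is only partially transitive. If corr(X,Y) = r1 and corr(Y,Z) = r2, then the requirement that the correlation matrix be positive semidefinite gives
corr(X,Z) ≥ r1·r2 − √((1 − r1²)(1 − r2²)),
which is positive only when r1² + r2² > 1. So chains of strong correlations do transfer, but chains of weak ones guarantee nothing; “usually transitive” is a heuristic, not a theorem.)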
Thorstad cites no evidence for his implication that the average IQ of blacks is equal to the average IQ of everyone. To the contrary, he completely ignores environmental effects on intelligence and zeroes in on the topic of genetic effects on intelligence. So even if he made an effort to show that there’s no genetic IQ gap there would still be a big loophole for environmental differences. Thorstad also didn’t make an effort to show that what he was saying about genetics was true, nor did he link to someone who did make that effort (but I will. Here’s someone critiquing the most famous version of HBD, and if you know of a work that directly addresses the whole body of scientific evidence rather than being designed as a rebuttal, I’d like to see it.) Overall, the piece comes across to me as unnecessarily politicized, unfair, judgemental, and not evidence-based in the places it needs to be.
Plus it tends toward dihydrogen monoxide-style arguments. To illustrate this, consider these arguments supporting the idea of man-made global warming: “denial that humans cause global warming is often funded by fossil-fuel companies with a vested interest in blocking environmental regulations, some of which have a history of unethical behavior. And many of the self-proclaimed experts who purport to show humans don’t cause climate change are in fact charlatans. The Great Global Warming Swindle, a denier film, labeled fellow denier Tim Ball as the ‘head of climatology’ at the University of Winnipeg, which does not, in fact, have a climatology department. As droughts, heat waves and hurricane damage figures increase, it’s time to reject denial and affirm that we humans are responsible.” As a former writer for SkepticalScience who fought against climate denial for years, I held my gag reflex as I wrote those sentences, because they were bad arguments. It’s not that they are false; it’s not that I disagree with them; it’s that they are politicized statements that create more heat than light and don’t help demonstrate that humans cause global warming. There are ample explainers and scientific evidence out there for man-made global warming, so you don’t need to rely on guilt-by-association or negative politically-charged narratives like the one I just wrote. Same thing for Bostrom—there may be good arguments against him, but I haven’t seen them.
I also believe actions speak louder than words, so that Bostrom’s value seems much higher than his disvalue (I know little about his value, but a quick look at his bio suggests it is high), and that in EA we should employ the principle of charity.
Also, if someone doesn’t know if an idea is true, it’s wrong to condemn them just for saying they don’t know or for not picking a side, as Thorstad does.
Yes, I agree that there’s a non-trivial divide in attitude. I don’t think the difference in discussion is surprising, at least based on a similar pattern observed with the response to FTX. From a quick search and look at the tag, there were on the order of 10 top-level posts on the subject on LW. There are 151 posts under the FTX collapse tag on the EA forum, and possibly more untagged.
I very much agree with your analysis, except for the “IMO correctly”. Firstly, because I hold the views of a “rationalist-EA”, so that is to be expected given your argument. Secondly, because we should not hold emails/posts against people 25+ years later, unless the views are ongoing and/or deeply relevant to their work today. Looking at his recent publications, they do not seem that relevant.
However, I would like to point out that, to me, EA also profits from the rationalist influx. EA to me is “rationality applied to doing good”. So the overlap is part of the deal.
A huge fraction of the EA community’s reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.
Generalizing a lot, it seems that “normie EAs” (IMO correctly) see glaring problems with Bostrom’s statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and “rationalist-EAs” want to debate race and IQ (or think that the issue is so minor/”wokeness-run-amok-y” that it should be ignored or censored). This predictably leads to conflict.
This is inaccurate as stated, but there is an important truth nearby. The apparent negatives you attribute to “rationalist” EAs are also true of non-rationalist old-timers in EA, who trend slightly non-woke while also keeping the rationalists at arm’s length. SBF himself was not particularly rationalist, for example. What seems to attract scandals is people being consequentialist, ambitious, and intense, which are possible features of rationalists and non-rationalists alike.
Generalizing a lot, it seems that “normie EAs” (IMO correctly) see glaring problems with Bostrom’s statement and want this incident to serve as a teachable moment
As a “rationalist-EA”, I would be curious if you could summarize what lessons you think should be drawn from this teachable moment (or link to such a summary that you endorse).
(To me, their Q1 seems like it highlights what should be the key lesson. While their Q2 provides important context that mitigates how censorious we should be in our response.)
Happy to comment on this, though I’ll add a few caveats first:
- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists (“many of my best friends/colleagues are rationalists”), or to imply that there aren’t some upsides to the communities’ overlap
- I am not sure what “institutional EA” should be doing about all this
- Since some of these are complex topics and ideally I’d want to cite lots of sources etc. in a detailed positive statement on them, I am using the “things to think about” framing. But hopefully this gives some flavor of my actual perspective while also pointing in fruitful directions for open-ended reflection.
- I may be able to follow up on specific clarifying Qs, though also am not sure how closely I’ll follow replies, so try to get in touch with me offline if you’re interested in further discussion.
- The upvoted comment is pretty long and I don’t really want to get into line-by-line discussion of specific agreements/disagreements, so will focus on sharing my own model.
Those caveats aside, I think some things that EA-rationalists might want to think about in light of recent events are below.
- Different senses of the word racism (~the “believing/stating that race is a ‘real thing’/there are non-trivial differences between them (especially cognitive ones) that anyone should care about” definition, and the “consciously or unconsciously treating people better/worse given their race” definition), why some people think the former is bad/should be treated with extreme levels of skepticism and not just the latter, and whether there might be a finer line between them in practice than some think.
- Why the rationalist community seems to treat race/IQ as an area where one should defer to “the scientific consensus” but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
- Whether the purported consensus folks often refer to actually exists + what kind of interpretations/takeaways one might draw from specific results/papers other than literal racism in the first sense above (I recommend The Genetic Lottery’s section on this).
- What the information value of “more accurate [in the red pill/blackpill sense] views on race” would even be “if true,” given that one never interacts with a distribution but with specific people.
- How Black people and other folks underrepresented in EA/rationalist communities, who often face multiple types of racism in the senses above, might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.
I’ll limit myself to one (multi-part) follow-up question for now —
Suppose someone in our community decides not to defer to the claimed “scientific consensus” on this issue (which I’ve seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?
I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the rationalist side?).
Let’s assume they try to distinguish between the two senses of “racism” that you mention, and try to treat all people respectfully and fairly. They don’t make a point of trumpeting their conclusion, since it’s not likely to make people feel good, and is generally not very helpful since we interact with individuals rather than distributions, as you say.
Let’s say they also try to examine their own biases and take into account how that might have influenced how they interpreted various claims and pieces of data. But after doing that, their honest assessment is still the same.
Beyond not broadcasting their view, and trying to treat people fairly and respectfully, would you say that they should go further, and pretend not to have reached the conclusion that they did, if it ever comes up?
Would you have any other advice for them, other than maybe something like, “Check your work again. You must have made a mistake. There’s an error in your thinking somewhere.”?
I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here—there are lots of considerations at play.
One view I hold, though, is something like “the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you’re considering the [personal/community-level] social implications thereof, is non-zero.” We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition in all such cases you will think you are right and others wrong, so there is a cost. But I don’t think it is automatically/definitionally bad for people to do that to some extent, and indeed much of the progress on issues like civil rights, gay rights etc. in the US has resulted in large part from actions getting ahead of beliefs among people who didn’t “get it” yet, with cultural/ideological change gradually following with generational replacement, pop culture changes, etc. Obviously people rarely think that they are in the wrong, but it’s hard to be sure, and I don’t think we [the world, EA] should be aiming for a culture where there are never repercussions for expressing beliefs that, in the speaker’s view, are true. Again, that’s consistent with people disagreeing about particular cases, just sharing my general view here.
This shouldn’t only work in one ideological “direction” of course, which may be a crux in how people react to the above. Some may see the philosophy above as (exclusively) an endorsement of wokism/cancel culture etc. in its entirety/current form [insofar as that were a coherent thing, which I’m not sure it is]. While I am probably less averse to some of those things than some LW/EAF readers, especially on the rationalist side, I also think that people should remember that restraint can be positive in many contexts. For example, I am, in my effort to engage and in my social media activities lately, trying to be careful to be respectful to people who identify strongly with the communities I am critiquing, and have held back some spicy jokes (e.g. playing on the “I like this statement and think it is true” line, which just begs for memes), precisely because I want to avoid alienating people who might be receptive to the object-level points I’m making, and because I don’t want to unduly egg on critiques by other folks on social media who I think sometimes go too far in attacking EAs, etc.
Is it okay if I give my personal perspective on those questions?
I suppose I should first state that I don’t expect that skin color has any effect on IQ whatsoever, and so on. But … I feel like the controversy in this case (among EAs) isn’t about whether one believes that or not [as EAs never express that belief AFAIK], but rather it is about whether one should do things like (i) reach a firm conclusion based purely on moral reasoning (or something like that), and (ii) attack people who gather evidence on the topic, just learn and comment about the topic, or even don’t learn much about the topic but commit the sin of not reaching the “right” conclusion within their state of ignorance.
My impression is that there is no scientific consensus on this question, so we cannot defer to it. Also, doesn’t the rationalist community in general, and EA-rationalists in particular, accept the consensus on most topics such as global warming, vaccine safety, homeopathy, nuclear power, and evolution? I wonder if you are seeing the tolerance of skepticism on LW or the relative tolerance of certain ideas/claims and thinking the tolerance is problematic. But maybe I am mistaken about whether the typical aspiring rationalist agrees with various consensuses.
[Whether the purported consensus folks often refer to is actually existent] The only consensus I think exists is that one’s genetic code can, in principle, affect intelligence, e.g. one could theoretically be a genius, an idiot, or an octopus, for genetic reasons (literally, if you have the right genes, you are an octopus, with the intelligence of an octopus, “because of your genes”). I don’t know whether or not there is some further consensus that relates somehow to skin color, but I do care about the fact that even the first matter is scarily controversial. There are cases where some information is too dangerous to be widely shared, such as “how to build an AGI” or “how to build a deadly infectious virus with stuff you can order online”. Likewise it would be terrible to tell children that their skin color is “linked” to lower intelligence; it’s “infohazardous if true” (because it has been observed that children in general may react to negative information by becoming discouraged and end up less skilled). But adults should be mature enough to be able to talk about this like adults. Since they generally aren’t that mature, what I wonder is how we should act given that there are confusing taboos and culture wars everywhere. For example, we can try adding various caveats and qualifications, but the Bostrom case demonstrates that these are often insufficient.
[What the information value of “more accurate [...] views on race” would even be “if true,”] I’d say the information value is low (which is why I have little interest in this topic) but that the disvalue of taboos is high. Yes, bad things are bad, but merely discussing bad things (without elaborate paranoid social protocols) isn’t.
[How Black people and other folks underrepresented [...] might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.] That’s a great question! I suspect that reactions differ tremendously between individuals. I also suspect that first impressions are key, so whatever appears at the top of this page, for instance, is important, but not nearly as important as whatever page about this topic is most widely circulated. But… am I wrong to think that the average black person would be less outraged by an apology that begins with “I completely repudiate this disgusting email from 26 years ago” than some people on this very forum?
- Why the rationalist community seems to treat race/IQ as an area where one should defer to “the scientific consensus” but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
With ivermectin we had a time when the best meta-analyses were pro-ivermectin but the scientific establishment was against ivermectin. Trusting meta-reviews published in reputable peer-reviewed journals is poorly described as “not deferring to the scientific consensus”. Scott also wrote a deep dive on ivermectin and the evidence for it in the scientific literature.
You might ask yourself: why doesn’t Scott Alexander write a deep dive on the literature on IQ and race? Why don’t other rationalists on LessWrong write deep dives on the literature on IQ and race and on the question of which hypotheses the literature supports and which it doesn’t?
From a truth-seeking perspective it would be nice to have such literature deep dives. From a practical standpoint, writing deep dives on the literature on IQ and race and having in-depth discussions about it has a high likelihood of offending people. The effort and risks that come with it are high enough that Scott is very unlikely to write such a post.
One view I hold, though, is something like “the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you’re considering the [personal/community-level] social implications thereof, is non-zero.”
I think that there’s broad agreement on this and that self-censorship is one of the core reasons why rationalists are not engaging as deeply with the literature around IQ and race as we did with ivermectin or COVID.
On the other hand, there are situations where there are reasons to actually speak about an issue, and people still express their views even if they would prefer to just avoid talking about the topic.
My view is that the rationalist community deeply values the virtue of epistemic integrity at all costs, and of accurately expressing your opinion regardless of social acceptability.
The EA community is focused on approximately maximising consequentialist impact.
Rationalist EAs should recognise when these virtues of epistemic integrity and epistemic accuracy are in conflict with maximising consequentialist impact, whether via direct, unintended consequences of expressing your opinions or via effects on EA’s reputation.
That makes sense, and I would agree with the idea that honesty is usually helpful for consequentialist reasons, but I think it is important to recognise cases where it is not.
Broadly, these cases are where the view you’re expressing doesn’t really help you do more good and the view brings a lot of harm to your reputation.
So as much as I disagree with Bostrom’s object level views on race / IQ, I think he should have lied about his views.
Another example I wrote down elsewhere:
If you were an atheist in a rural, conservative part of Afghanistan today aiming to improve the world by challenging the mistreatment of women and LGBT people, and you told people that you think that God doesn’t exist, even if that was you accurately expressing your true beliefs, you would be so far from the Overton Window that you’re probably making it more difficult for yourself to improve things for LGBT people and women. Much better to say that you’re a Muslim and you think women and LGBT people should be treated better.
“Teachable moment” means that you’re supposed to see what the politically advantageous thing is and then do it. In this case that would be completely ejecting Bostrom from all association with EA.
I would say it’s less about rationalists vs non-rationalists and more that people who are inclined to social justice norms (who tend not to be rationalists, though one can be both or neither) think it’s a big deal and people who aren’t are at least less committal.
I think there’s a decent case to be made that a lot of social justice norms (though certainly not all) can be arrived at by utilitarian reasoning (“normie EA”) while a lot of opposition to social justice norms can be arrived at through a sort of truth seeking that actively eschews social norms (“rationalist”).
I think that social justice norms are sometimes harmful from a consequentialist viewpoint. The social justice project largely consists of highlighting disparities between oppressor groups and oppressed groups and attributing disparities to immoral action on the part of oppressor groups. I think that most of these beliefs are actually false and the proposed solutions are harmful in that they will not actually solve the problem because the belief is false. I think that they make social relations worse.
More egregious is social justice advocates’ propensity for censorship in the name of emotional harm-avoidance, and their willingness to attack the character of people who disagree with their viewpoint as bigots of various types. And most egregious is the small minority who actively cause reputational damage, firings, and ostracism.
I think that various harmful social views and policies persist because many social justice advocates think they’re so right they don’t need particularly good truth-seeking behavior.
Note that there is now at least one post on LW front page that is at least indirectly about the Bostrom stuff. I am not sure if it was there before and I missed it or what.
And others’ comments have updated me a bit towards the forum vs. forum difference being less surprising.
I still think there is something like the above going on, though, as shown by the kinds of views being expressed + who’s expressing them just on EA Forum, and on social media.
But I probably should have left LW out of my “argument” since I’m less familiar with typical patterns/norms there.
The indirectness is also quite relevant to that. On LessWrong it’s pretty encouraged to take current events and try to extract generalizable lessons from them, and make statements that are removed from the local political landscape. I am glad that post was written, and would have been happy about it independently of any Bostrom stuff going on.
Can I ask for a link to this ‘indirect post’? I’m interested in the generalized lessons being advertised here, but couldn’t find the post after looking on LW.
The rationalist community celebrates the virtue of epistemic integrity at all costs and celebrates the expression of opinions when they are deeply unpopular.
“Normie EAs” are not willing to sacrifice consequentialist impact for these virtues.
Seeing the discussion play out here lately, and in parallel seeing the topic either not be brought up or be totally censored on LessWrong, has made the following more clear to me:
A huge fraction of the EA community’s reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.
Generalizing a lot, it seems that “normie EAs” (IMO correctly) see glaring problems with Bostrom’s statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and “rationalist-EAs” want to debate race and IQ (or think that the issue is so minor/”wokeness-run-amok-y” that it should be ignored or censored). This predictably leads to conflict.
(I am sure many will take issue with this, but I suspect it will ring true/help clarify things for some, and if this isn’t the time/place to discuss it, I don’t know when/where that would be)
[Edit: I elaborated on various aspects of my views in the comments, though one could potentially agree with this comment/not all the below etc.]
There’s definitely no censorship of the topic on LessWrong. Obviously I don’t know for sure why discussion is sparse, but my guess is that people mostly (and, in my opinion, correctly) don’t think it’s a particularly interesting or fruitful topic to discuss on LessWrong, or that the degree to which it’s an interesting subject is significantly outweighed by mindkilling effects.
Edit: with respect to the rest of the comment, I disagree that rationalists are especially interested in object-level discussion of the subjects, but probably are much more likely to disapprove of the idea that discussion of the subject should be verboten.
I think the framing where Bostrom’s apology is a subject which has to be deliberately ignored is mistaken. Your prior for whether something sees active discussion on LessWrong is that it doesn’t, because most things don’t, unless there’s a specific reason you’d expect it to be of interest to the users there. I admit I haven’t seen a compelling argument for there being a teachable moment here, except the obvious “don’t do something like that in the first place”, and perhaps “have a few people read over your apology with a critical eye before posting it” (assuming that didn’t in fact happen). I’m sure you could find a way to tie those in to the practice of rationality, but it’s a bit of a stretch.
Thanks for clarifying on the censorship point!
I do think it’s pretty surprising and in-need-of-an-explanation that it isn’t being discussed (much?) on LW—LW and EA Forum are often pretty correlated in terms of covering big “[EA/rationalist/longtermist] community news” like developments in AI, controversies related to famous people in one or more of those groups, etc. And it’s hard to think of more than 1-2 people who are bigger deals in those communities than Bostrom (at most, arguably it’s zero). So him being “cancelled” (something that’s being covered in mainstream media) seems like a pretty obvious thing to discuss.
To be clear, I am not suggesting any malicious intent (e.g. “burying” something for reputational purposes), and I probably shouldn’t have used the word censorship. If that’s not what’s going on, then yes, it’s probably just that most LWers think it’s no big deal. But that does line up with my view that there is a huge rationalist-EA vs. normie-EA divide, which I think people could agree with even if they lean more towards the other side of the divide than me.
LessWrong in-general is much less centered around personalities and individuals, and more centered around ideas. Eliezer is a bit of an outlier here, but even then, I don’t think personality-drama around Eliezer could even raise to the level of prominence that personality-drama tends to have on the EA Forum.
I don’t find this explanation convincing fwiw. Eliezer is an incredible case of hero-worship—it’s become the norm to just link to jargon he created as though it’s enough to settle an argument. The closest thing we have here is Will, and most EAs seem to favour him for his character rather than necessarily agreeing with his views—let alone linking to his posts like they were scripture.
Other than the two of them, I wouldn’t say there’s much discussion of personalities and individuals on either forum.
I think that you misunderstand why people link to things.
If someone didn’t get why I feel morally obligated to help people who live in distant countries, I would likely link them to Singer’s drowning child thought experiment. Either during my explanation of how I feel, or in lieu of one if I were busy.
This is not because I hero-worship Singer. This is not because I think his posts are scripture. This is because I broadly agree with the specific thing he said which I am linking, and he put it well, and he put it first, and there isn’t a lot of point of duplicating that effort. If after reading you disagree, that’s fine, I can be convinced. The argument can continue as long as it doesn’t continue for reasons that are soundly refuted in the thing I just linked.
I link people to things pretty frequently in casual conversation. A lot of the time, I link them to something posted to the EA Forum or LessWrong. A lot of the time, it’s something written by Eliezer Yudkowsky. This isn’t because I hero-worship him, or that I think linking to something he said settles an argument—it’s because I broadly agree with the specific thing I’m linking and don’t see the point of duplicating effort. If after reading you disagree, that’s fine, I can be convinced. The argument can continue as long as it doesn’t continue for reasons that are soundly refuted in the thing I just linked.
There are a ton of people who I’d like to link to as frequently as I do Eliezer. But Eliezer wrote in short easily-digested essays, on the internet instead of as chapters in a paper book or pdf. He’s easy to link to, so he gets linked.
There’s a world of difference between the link-phrases ‘here’s an argument about why you should do x’ and ‘do x’. Only Eliezer seems to regularly merit the latter.
Here are the last four things I remember seeing linked as supporting evidence in casual conversation on the EA forum, in no particular order:
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=HebnLpj2pqyctd72F—link to Scott Alexander, “We have to stop it with the pointless infighting or it’s all we will end up doing,” is ‘do x’-y if anything is. (It also sounds like a perfectly reasonable thing to say and a perfectly reasonable way to say it.)
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=SCfBodrdQYZBA6RBy—separate links to Scott Alexander and Eliezer Yudkowsky, neither of which seem very ‘do x’-y to me.
https://forum.effectivealtruism.org/posts/irhgjSgvocfrwnzRz/?commentId=NF9YQfrDGPcH6wYCb—link to Scott Alexander, seems somewhat though not extremely ‘do x’-y to me. Also seems like a perfectly reasonable thing to say and I stand by saying it.
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=x5zqnevWR8MQHqqvd—link to Duncan Sabien, “I care about the lives we can save if we don’t rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it’d make any sense whatsoever for the accusation de jour to be what it looks like,” seems pretty darn ‘do x’-y. I don’t necessarily stand behind how strongly I came on there, I was in a pretty foul mood.
I think that mostly, this is just how people talk.
I am not making the stronger claim that there are zero people who hero-worship Eliezer Yudkowsky.
Also, I would add at the very least Gwern (which might be relevant to note regarding the current topic) and Scott Alexander as other two clear cases of “personalities” in LW
I agree that there are of course individual people that are trusted and that have a reputation within the community, but the frequency of conversations around Scott Alexander’s personality, or his reputation, or his net-effect on the world, is much rarer on LW than it is on the EA Forum, as far as I can tell.
Like, when was actually the last thread on LW about drama caused by a specific organization or individual? In my mind almost all of that tends to congregate on the EA Forum.
My guess is that you see this more in EA because the stakes are higher for EAs. There’s much more of a sense that people here are contributing to the establishment and continuation of a movement, the movement is often core to people’s identities (it’s why they do what they do, live where they live, etc), and ‘drama’ can have consequences on the progress of work people care a lot about. Few people are here just for the interesting ideas.
While LW does have a bit of a corresponding rationality movement I think it’s weaker or less central on all of these angles.
Yep, I agree, that’s a big part of it.
I think Jeff is right, but I would go so far as to say the hero worship on LW is so strong that there’s also a selection effect: if you don’t find Eliezer and co. convincing, you won’t spend time on a forum that treats them with such reverence (this, at least, is part of why I’ve never spent much time there, despite being a cold calculating Vulcan type).
Re drama around organisations, there are way more orgs which one might consider EA than which one might consider rationalist, so there’s just more available lightning rods.
It’s a plausible explanation! I do think that even for Eliezer, I really don’t remember much discussion of, like, him and his personality in recent years. Do you have any links? (I can maybe remember something from like 7 years ago, but nothing since LW 2.0.)
Overall, I think there are a bunch of other, also somewhat bad, dynamics going on on LW, but I do genuinely think that there isn’t that much hero worship or institution/personality-oriented drama.
I’m saying the people who view him negatively just tend to self-select out of LW. Those who remain might not bother to have substantive discussion; it’s just that the average mention of him seems ridiculously deferential/overzealous in describing his achievements (for example, I recently went to an EAGx talk which described him, along with Tetlock, as one of the two ‘fathers of forecasting’).
If you want to see negative discussion of him, that seems to be basically what RationalWiki and r/Sneerclub exist for.
Putting Habryka’s claim another way: if Eliezer were involved right now in a huge scandal like, say, SBF’s or Will MacAskill’s, then I think modern LW would mostly handle it pretty well. Not perfectly, but I wouldn’t expect nearly the amount of drama that EA is getting. (Early LW from the 2000s or early 2010s would probably do worse, IMO.) My suspicion is that LW would have far less personal drama over Eliezer than, say, EA would over SBF or Nick Bostrom.
I think there are a few things going on here, not sure how many we’d disagree on. I claim:
Eliezer has direct influence over far fewer community-relevant organisations than Will does or SBF did (cf comment above that there exist far fewer such orgs for the rationalist community). Therefore a much smaller proportion of his actions are relevant to the LW community than Will’s are and SBF’s were to the EA community.
I don’t think there’s been a huge scandal involving Will? Sure, there are questions we’d like to see him openly address about what he could have done differently re FTX—and I personally am concerned about his aforementioned influence because I don’t want anyone to have that much—but very few if any people here seem to believe he’s done anything in seriously bad faith.
I think the a priori chance of a scandal involving Eliezer on LW is much lower than the chance of a scandal on here involving Will because of the selection effect I mentioned—the people on LW are selected more strongly for being willing to overlook his faults. The people who both have an interest in rationality and get scandalised by Bostrom/Eliezer hang out on Sneerclub, pretty much being scandalised by them all the time.
The culture on here seems more heterogeneous than LW’s. Inasmuch as we’re more drama-prone, I would guess that’s the main reason why: there’s a broader range of viewpoints and events that will trigger a substantial proportion of the userbase.
So these theories support/explain why there might be more drama on here, but push back against the ‘no hero-worship/not personality-oriented’ claims, which both ring false to me. Overall, I also don’t think the lower drama on LW implies a healthier epistemic climate.
I was imagining a counterfactual world where Will MacAskill did something hugely wrong.
And yeah come to think of it, selection may be quite a bit stronger than I think.
The bigger discussion from maybe 7 years ago that Habryka refers to was, as far as my memory goes, his April Fools’ Day post in 2014 about Dath Ilan. The resulting discussion was critical enough of EY that from that point on most of EY’s writing was published on Facebook/Twitter and not on LessWrong anymore. On his Facebook feed he can simply ban people he finds annoying, but on LessWrong he couldn’t.
Izzat true? Aside from edited versions of other posts and cross-posts by the LW admins, I see zero EY posts on LW between mid-September 2013 and Aug 2016, versus 21 real posts earlier in 2013, 29 in 2012, 12 in 2011, 17 in 2010, and ~180 in 2009.
So I see a big drop-off after the Sequences ended in 2009, and a complete halt in Sep 2013. Though I guess if he’d mostly stopped posting to LW anyway and then had a negative experience when he poked his head back in, that could cement a decision to post less to LW.
(This is the first time I’m hearing that the post got deleted, I thought I saw it on LW more recently than that?)
2017 is when LW 2.0 launched, so 2014-2016 was also a nadir in the site’s quality and general activity.
I was active on LessWrong at that time and am mostly going from memory, and memory of something that happened eight years ago isn’t perfect.
https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession was, to my memory, also posted to LessWrong, and the LessWrong version of that post has been deleted.
A Google search of LessWrong for that timeframe doesn’t bring up any mention of Dath Ilan.
Is your memory that Dath Ilan was just never talked about on LessWrong when Eliezer wrote that post?
Which post is this? I looked on EY’s LW profile but couldn’t see which one this was referring to. There’s this blog post, https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession, but it’s not on LW. Also, it looks like there have been a lot of posts from EY on LW since 2014?
I think that’s the post. As far as my memory goes, the criticism led to Eliezer deleting it from LessWrong.
As a person who started reading LW several months ago, I think that Eliezer is a great thinker, but he does get things wrong fairly often. He is not a perfect thinker or a hero, but he is quite a bit better than most (arguably far better).
I wouldn’t idolize him, but nor would I ignore Eliezer’s accomplishments.
This seems to fit with the fact that there wasn’t much appetite for the consequentialist argument against Bostrom until the term “information hazard” came up.
I for one probably wouldn’t have brought it up on LessWrong because it seems like a tempest in a teapot. What is there to say? Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely. Normally this would be standard CW stuff, except for the connection to EA. The most notable thing — scary thing — is how some people on this forum seem to be saying something like “Nick is a bad person, his apology is not acceptable, and it’s awful that not everyone is on board with my interpretation” (“agreed”, whispers the downvote brigade in a long series of −1s against dissenters.) If I bring this up as a metadiscussion on LW, would others understand this sentiment better than me?
I suspect that the neurotypicals most able to explain it to weirdos like me are more likely to be here than there. Since you said that
I assume you mean the apology, and I would be grateful if you would explain what these glaring problems are. [edit: also, upon reflection, maybe it’s not a neurodiverse vs neurotypical divide but something else, such as political thinking, general rules of thought, or moral system. I never wanted to vote Republican, so I’m thinking it’s more like a Democrat vs Independent divide.]
I am curious, too, whether other people see the same problems or different ones. (A general phenomenon in life is that vague statements get a lot more upvotes than specific ones, because people often agree with a conclusion while disagreeing on why that conclusion is true.)
Registering strong disagreement with this characterisation. Nick has done vanishingly little to apologise, both now and in 1997. In the original emails and the latest apology, he has done less to distance himself from racism than to endorse it.
In what ways do you think the 2023 message endorses racism? Is there a particular quote or feature of it that stands out to you?
The apology contains an emphatic condemnation of the use of a racist slur:
The 1996 email was part of a discussion of offensive communication styles. It included a heavily contested and controversial claim about group intelligence, which I will not repeat here. [1] Claims like these have been made by racist groups in the past, and an interest in such claims correlates with racist views. But there is not a strict correlation here: expressing or studying such claims does not entail you have racist values or motivations.
In general I see genetic disparity as one of the biggest underlying causes of inequality and injustice. I’ve no informed views or particular interests in averages between groups of different skin colour. But I do feel terrible for people who find themselves born with a difficult hand in the genetic lottery (e.g. a tendency to severe depression or dementia). And so I endorse research on genetic causes of chronic disadvantage, with the hope that we can improve things.
[1] This comment by Geoffrey Miller provides a bit more context on why Bostrom may have chosen this particular example.
One of the main complaints people (including me) have about Bostrom’s old_email.pdf is that he focuses on the use of a slur as the thing he is regretful for, and is operating under a very narrow definition of racism where a racist is someone who dislikes people of other races. But the main fault with the 1996 email, for which Bostrom should apologise, the most important harm and the main reason it is racist, was that it propagated the belief that blacks are inherently stupider than whites (it did not comment on the causation, but used language that is conventionally understood to refer to congenital traits, ‘blacks have lower IQ than mankind in general’). Under this view, old_email.pdf omits to apologise for the main thing people are upset about in the 1996 email, namely, the racist belief, and the lack of empathy for those reading it; and it clarifies further that, in Bostrom’s view, the lower IQ of blacks may in fact be in no small part genetically determined, and moreover, as David Thorstad writes, “Bostrom shows no desire to educate himself on the racist and discredited science driving his original beliefs or on the full extent of the harms done by these beliefs. He does not promise to read any books, have hard conversations, or even to behave better in the future. If Bostrom is not planning to change, then why are we to believe that his behavior will be any better than it was in the 1990s?”
So in my view: in total, in 1996 Nick endorses racist views, and in 2023 he clarifies beyond doubt that the IQ gap between blacks and whites may be genetically determined (and says sorry for using a bad word).
A more detailed viewpoint close to my own from David Thorstad: https://ineffectivealtruismblog.com/2023/01/12/off-series-that-bostrom-email/
Would you prefer Bostrom’s apology read:
Even if he, with evidence, still believes it to be true? David Thorstad can write all he wants about changing his views, but the evidence of the existence of a racial IQ gap has not changed. It is as ironclad and universally accepted by all researchers as it was in 1996 following the publication of the APA’s Intelligence: Knowns and Unknowns.
This may be a difference of opinion, but I don’t think that acknowledging observed differences in reality is a racist view. But I am interested to know if you would prefer he make the statement anyway.
By the way, the finding of an IQ gap isn’t (or shouldn’t be?) what is under contention/offensive, because that’s a real finding. It’s the idea that it has a significant genetic component.
I think both Bostrom and I claim that he does not believe that idea, but I’ll entertain your hypothetical below.
I think that, in the world where racial IQ gaps are known not to have a significant genetic component, believing so anyway as a layperson makes one very probably a racist (glossed as a person whose thinking is biased by motivated reasoning on the basis of race); and in the world where racial IQ gaps are known to have a significant genetic component, believing so is not strong evidence of being a racist (with the same gloss). There are also worlds in between.
In any of these worlds, and the world where we live, responsible non-experts should defer to the scientific consensus (as Bostrom seems to in 2023), and when they irresponsibly promote beliefs that are extremely harmful and false, through recklessness, they should apologise for that.
I don’t think anyone should apologise for the very act of believing something one still believes, because an apology is by nature a disagreement with one’s past self. But Bostrom in 2023 does not seem to believe any more, if he ever did, that the racial IQ gap is genetically caused, which frees him up to apologise for his 1996 promotion of the belief.
As a reminder, the original description I took issue with was:
It ‘sounds pretty racist’ to say “blacks have lower IQ than mankind in general” because that phrasing usually implies it’s congenital. In other words, in 1996, Bostrom (whose status as a racist is ambiguous to me, and I will continue to judge his character based on his actions in the coming weeks and months) said something that communicates a racist belief, and I want to give him the benefit of the doubt that it was an accident — a reckless one, but an accident. However, apart from apologising for the n-word slur, I haven’t seen much that can be interpreted as an apology for the harm caused by this accident.
Now, if Bostrom, as a non-expert, in fact is secretly confident that IQ and race correlate because of genetics, I think that his thinking is probably biased in a racist way (that is to say, he is a racist) and he should be suspicious of his own motives in holding that belief. If he then finds his view was mistaken, he may meaningfully apologise for any racist bias that influenced his thinking. Otherwise, an apology would not make any sense as he would not think he’d done anything wrong.
The lack of an apology for promulgating the racist view, accidentally or deliberately, is wrong if Bostrom does not hold the view (or no longer holds it). He is mistaken when, in 2023, he skates over acknowledging the main harm he contributed to by focusing mostly on his mention of the n-word (a lesser harm, partly due to the use-mention distinction).
I feel like some people are reading “I completely repudiate this disgusting email from 26 years ago” and thinking that he has not repudiated the entire email, just because he also says “The invocation of a racial slur was repulsive”. I wonder if you interpreted it that way.
One thing I think Bostrom should have specifically addressed was when he said “I like that sentence”. It’s not a likeable sentence! It’s an ambiguous sentence (one interpretation of which is obviously false) that carries a bad connotation (in the same way that “you did worse than Joe on the test” has a different connotation than “Joe did better than you on the test”, making the second sentence probably better). Worst of all, it sounds like the kind of thing racists say. The nicest thing I would say about this sentence is that it’s very cringe.
Now I’m a “high-decoupler Independent”, and “low-decoupler Democrats” clearly wanted Bostrom to say different things than me. However, I suspect Bostrom is a high-decoupler Independent himself, and on that basis he loses points in my mind for not addressing the sorts of things that I myself notice. On the other hand… apology-crafting is hard and I think he made a genuine attempt.
But there are several things I take issue with in Thorstad’s post, just one of which I will highlight here. He said that claims like “I think it is probable that black people have a lower average IQ than mankind in general” are “widely repudiated, are based on a long history of racist pseudoscience and must be rejected” (emphasis mine). In response to this I want to highlight a comment that discusses an anti-Bostrom post on this forum:
I think that we high-decouplers tend to feel that it is deeply wrong to treat a proposition X as true if it is expressed in one way, but false/offensive if expressed in another way. If it’s true, it’s true, and it’s okay to say so without getting the wording perfect.[1]
Per the Flynn effect, which I don’t believe is controversial, populations vary significantly in IQ depending on when they were born. But if timing of birth is correlated with IQ, then couldn’t location of birth be correlated with IQ? Or poverty, or education? And is there not some correlation between poverty and skin color? And are not correlations usually transitive? I’m not trying to prove the case here, just trying to say that people can reasonably believe there is a correlation, and indeed, you can see that even the anti-Bostrom post above implies that a correlation exists.
Thorstad cites no evidence for his implication that the average IQ of blacks is equal to the average IQ of everyone. To the contrary, he completely ignores environmental effects on intelligence and zeroes in on the topic of genetic effects on intelligence. So even if he made an effort to show that there’s no genetic IQ gap there would still be a big loophole for environmental differences. Thorstad also didn’t make an effort to show that what he was saying about genetics was true, nor did he link to someone who did make that effort (but I will. Here’s someone critiquing the most famous version of HBD, and if you know of a work that directly addresses the whole body of scientific evidence rather than being designed as a rebuttal, I’d like to see it.) Overall, the piece comes across to me as unnecessarily politicized, unfair, judgemental, and not evidence-based in the places it needs to be.
Plus it tends toward dihydrogen monoxide-style arguments. To illustrate this, consider these arguments supporting the idea of man-made global warming: “denial that humans cause global warming is often funded by fossil-fuel companies with a vested interest in blocking environmental regulations, some of which have a history of unethical behavior. And many of the self-proclaimed experts who purport to show humans don’t cause climate change are in fact charlatans. The Great Global Warming Swindle, a denier film, labeled fellow denier Tim Ball as the ‘head of climatology’ at the University of Winnipeg, which does not, in fact, have a climatology department. As droughts, heat waves and hurricane damage figures increase, it’s time to reject denial and affirm that we humans are responsible.” As a former writer for SkepticalScience who fought against climate denial for years, I held my gag reflex as I wrote those sentences, because they were bad arguments. It’s not that they are false; it’s not that I disagree with them; it’s that they are politicized statements that create more heat than light and don’t help demonstrate that humans cause global warming. There are ample explainers and scientific evidence out there for man-made global warming, so you don’t need to rely on guilt-by-association or negative politically-charged narratives like the one I just wrote. Same thing for Bostrom—there may be good arguments against him, but I haven’t seen them.
I also believe actions speak louder than words, so that Bostrom’s value seems much higher than his disvalue (I know little about his value, but a quick look at his bio suggests it is high), and that in EA we should employ the principle of charity.
Also, if someone doesn’t know if an idea is true, it’s wrong to condemn them just for saying they don’t know or for not picking a side, as Thorstad does.
Yes, I agree that there’s a non-trivial divide in attitude. I don’t think the difference in discussion is surprising, at least based on a similar pattern observed with the response to FTX. From a quick search and look at the tag, there were on the order of 10 top-level posts on the subject on LW. There are 151 posts under the FTX collapse tag on the EA forum, and possibly more untagged.
I very much agree with your analysis, except for the “IMO correctly”. Firstly, because I hold the views of a “rationalist-EA”, so that is to be expected given your argument. Secondly, because we should not hold emails/posts against people 25+ years later, unless the views are ongoing and/or deeply relevant to their work today. Looking at his recent publications, they do not seem that relevant.
However, I would like to point out that, to me, the benefits of EA also stem in part from the rationality influx. EA to me is “rationality applied to doing good”. So the overlap is part of the deal.
(will vaguely follow-up on this in my response to ESRogs’s parallel comment)
This is inaccurate as stated, but there is an important truth nearby. The apparent negatives you attribute to “rationalist” EAs are also true of non-rationalist old-timers in EA, who trend slightly non-woke while also keeping the rationalists at arm’s length. SBF himself was not particularly rationalist, for example. What seems to attract scandals is people being consequentialist, ambitious, and intense, which are possible features of rationalists and non-rationalists alike.
As a “rationalist-EA”, I would be curious if you could summarize what lessons you think should be drawn from this teachable moment (or link to such a summary that you endorse).
In particular, do you disagree with the current top comment on this post?
(To me, their Q1 seems like it highlights what should be the key lesson, while their Q2 provides important context that mitigates how censorious we should be in our response.)
Happy to comment on this, though I’ll add a few caveats first:
- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists (“many of my best friends/colleagues are rationalists”), or to imply that there aren’t some upsides to the communities’ overlap
- I am not sure what “institutional EA” should be doing about all this
- Since some of these are complex topics and ideally I’d want to cite lots of sources etc. in a detailed positive statement on them, I am using the “things to think about” framing. But hopefully this gives some flavor of my actual perspective while also pointing in fruitful directions for open-ended reflection.
- I may be able to follow up on specific clarifying Qs though also am not sure how closely I’ll follow replies, so try to get in touch with me offline if you’re interested in further discussion.
- The upvoted comment is pretty long and I don’t really want to get into line-by-line discussion of specific agreements/disagreements, so will focus on sharing my own model.
Those caveats aside, I think some things that EA-rationalists might want to think about in light of recent events are below.
- Different senses of the word racism (roughly, the “believing/stating that race is a ‘real thing’ and that there are non-trivial differences between races, especially cognitive ones, that anyone should care about” definition, and the “consciously or unconsciously treating people better/worse given their race” definition), why some people think the former is bad/should be treated with extreme levels of skepticism and not just the latter, and whether there might be a finer line between them in practice than some think.
- Why the rationalist community seems to treat race/IQ as an area where one should defer to “the scientific consensus” but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
- Whether the purported consensus folks often refer to actually exists, and what kinds of interpretations/takeaways one might draw from specific results/papers other than literal racism in the first sense above (I recommend The Genetic Lottery’s section on this).
- What the information value of “more accurate [in the red pill/blackpill sense] views on race” would even be “if true,” given that one never interacts with a distribution but with specific people.
- How Black people and other folks underrepresented in EA/rationalist communities, who often face multiple types of racism in the senses above, might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.
I’ll limit myself to one (multi-part) follow-up question for now —
Suppose someone in our community decides not to defer to the claimed “scientific consensus” on this issue (which I’ve seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?
I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the rationalist side?).
Let’s assume they try to distinguish between the two senses of “racism” that you mention, and try to treat all people respectfully and fairly. They don’t make a point of trumpeting their conclusion, since it’s not likely to make people feel good, and is generally not very helpful since we interact with individuals rather than distributions, as you say.
Let’s say they also try to examine their own biases and take into account how that might have influenced how they interpreted various claims and pieces of data. But after doing that, their honest assessment is still the same.
Beyond not broadcasting their view, and trying to treat people fairly and respectfully, would you say that they should go further, and pretend not to have reached the conclusion that they did, if it ever comes up?
Would you have any other advice for them, other than maybe something like, “Check your work again. You must have made a mistake. There’s an error in your thinking somewhere.”?
I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here—there are lots of considerations at play.
One view I hold, though, is something like “the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you’re considering the [personal/community-level] social implications thereof, is non-zero.” We can of course disagree on the precise amount and contexts for this, and sometimes it can go too far. And by definition in all such cases you will think you are right and others wrong, so there is a cost. But I don’t think it is automatically/definitionally bad for people to do that to some extent, and indeed much of the progress on issues like civil rights and gay rights in the US has resulted in large part from actions getting ahead of beliefs among people who didn’t “get it” yet, with cultural/ideological change gradually following via generational replacement, pop culture changes, etc. Obviously people rarely think that they are in the wrong, but it’s hard to be sure, and I don’t think we [the world, EA] should be aiming for a culture where there are never repercussions for expressing beliefs that, in the speaker’s view, are true. Again, that’s consistent with people disagreeing about particular cases; I am just sharing my general view here.
This shouldn’t only work in one ideological “direction”, of course, which may be a crux in how people react to the above. Some may see the philosophy above as (exclusively) an endorsement of wokism/cancel culture etc. in its entirety/current form [insofar as that is a coherent thing, which I’m not sure it is]. While I am probably less averse to some of those things than some LW/EAF readers, especially those on the rationalist side, I also think that people should remember that restraint can be positive in many contexts. For example, in my efforts to engage and in my social media activity lately, I am trying to be careful to be respectful to people who identify strongly with the communities I am critiquing, and I have held back some spicy jokes (e.g. playing on the “I like this statement and think it is true” line, which just begs for memes), precisely because I want to avoid alienating people who might be receptive to the object-level points I’m making, and because I don’t want to unduly egg on critiques by other folks on social media who I think sometimes go too far in attacking EAs, etc.
Is it okay if I give my personal perspective on those questions?
I suppose I should first state that I don’t expect that skin color has any effect on IQ whatsoever, and so on. But… I feel like the controversy in this case (among EAs) isn’t about whether one believes that or not [as EAs never express that belief AFAIK], but rather about whether one should do things like (i) reach a firm conclusion based purely on moral reasoning (or something like that), and (ii) attack people who gather evidence on the topic, who just learn and comment about the topic, or who don’t learn much about the topic but commit the sin of not reaching the “right” conclusion within their state of ignorance.
My impression is that there is no scientific consensus on this question, so we cannot defer to it. Also, doesn’t the rationalist community in general, and EA-rationalists in particular, accept the consensus on most topics such as global warming, vaccine safety, homeopathy, nuclear power, and evolution? I wonder if you are seeing the tolerance of skepticism on LW or the relative tolerance of certain ideas/claims and thinking the tolerance is problematic. But maybe I am mistaken about whether the typical aspiring rationalist agrees with various consensuses.
[Whether the purported consensus folks often refer to actually exists] The only consensus I think exists is that one’s genetic code can, in principle, affect intelligence, e.g. one could theoretically be a genius, an idiot, or an octopus, for genetic reasons (literally, if you have the right genes, you are an octopus, with the intelligence of an octopus, “because of your genes”). I don’t know whether or not there is some further consensus that relates somehow to skin color, but I do care about the fact that even the first matter is scarily controversial. There are cases where some information is too dangerous to be widely shared, such as “how to build an AGI” or “how to build a deadly infectious virus with stuff you can order online”. Likewise it would be terrible to tell children that their skin color is “linked” to lower intelligence; it’s “infohazardous if true” (because it has been observed that children in general may react to negative information by becoming discouraged and ending up less skilled). But adults should be mature enough to be able to talk about this like adults. Since they generally aren’t that mature, what I wonder is how we should act given that there are confusing taboos and culture wars everywhere. For example, we can try adding various caveats and qualifications, but the Bostrom case demonstrates that these are often insufficient.
[What the information value of “more accurate [...] views on race” would even be “if true,”] I’d say the information value is low (which is why I have little interest in this topic) but that the disvalue of taboos is high. Yes, bad things are bad, but merely discussing bad things (without elaborate paranoid social protocols) isn’t.
[How Black people and other folks underrepresented [...] might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.] That’s a great question! I suspect that reactions differ tremendously between individuals. I also suspect that first impressions are key, so whatever appears at the top of this page, for instance, is important, but not nearly as important as whatever page about this topic is most widely circulated. But… am I wrong to think that the average black person would be less outraged by an apology that begins with “I completely repudiate this disgusting email from 26 years ago” than some people on this very forum?
With ivermectin, we had a time when the best meta-analyses were pro-ivermectin but the scientific establishment was against ivermectin. Trusting meta-analyses published in reputable peer-reviewed journals is poorly described as “not deferring to the scientific consensus”. Scott also wrote a deep dive on ivermectin and the evidence for it in the scientific literature.
You might ask yourself: why doesn’t Scott Alexander write a deep dive on the literature on IQ and race? Why don’t other rationalists on LessWrong write deep dives on the literature on IQ and race, and on which hypotheses are supported by the literature and which aren’t?
From a truth-seeking perspective it would be nice to have such literature deep dives. From a practical standpoint, writing deep dives on the literature on IQ and race and having in-depth discussions about it has a high likelihood of offending people. The effort and risks that come with it are high enough that Scott is very unlikely to write such a post.
I think that there’s broad agreement on this, and that self-censorship is one of the core reasons why rationalists are not engaging as deeply with the literature around IQ and race as we did with ivermectin or COVID.
On the other hand, there are situations where there are reasons to actually speak about an issue, and people still express their views even if they would prefer to just avoid talking about the topic.
Thanks, I appreciate the thoughtful response!
My view is that the rationalist community deeply values the virtues of epistemic integrity at all costs and of accurately expressing your opinion regardless of social acceptability.
The EA community is focused on approximately maximising consequentialist impact.
Rationalist EAs should recognise when these virtues of epistemic integrity and epistemic accuracy are in conflict with maximising consequentialist impact, whether via direct, unintended consequences of expressing your opinions or via effects on EA’s reputation.
For what it’s worth, I have my commitment to honesty primarily for consequentialist reasons.
That makes sense, and I would agree that honesty is usually helpful for consequentialist reasons, but I think it is important to recognise cases where it is not.
Broadly, these cases are where the view you’re expressing doesn’t really help you do more good and the view brings a lot of harm to your reputation.
So as much as I disagree with Bostrom’s object level views on race / IQ, I think he should have lied about his views.
Another example I wrote down elsewhere:
If you were an atheist in a rural, conservative part of Afghanistan today aiming to improve the world by challenging the mistreatment of women and LGBT people, and you told people that you think that God doesn’t exist, even if that was you accurately expressing your true beliefs, you would be so far from the Overton Window that you’re probably making it more difficult for yourself to improve things for LGBT people and women. Much better to say that you’re a Muslim and you think women and LGBT people should be treated better.
“Teachable moment” means that you’re supposed to see what the politically advantageous thing is and then do it. In this case, that would be completely ejecting Bostrom from all association with EA.
I think it’s a bit more nuanced than that + added some more detail on my views below.
I would say it’s less about rationalists vs non-rationalists and more that people who are inclined to social justice norms (who tend not to be rationalists, though one can be both or neither) think it’s a big deal and people who aren’t are at least less committal.
I think there’s a decent case to be made that a lot of social justice norms (though certainly not all) can be arrived at by utilitarian reasoning (“normie EA”) while a lot of opposition to social justice norms can be arrived at through a sort of truth seeking that actively eschews social norms (“rationalist”).
I think that social justice norms are sometimes harmful from a consequentialist viewpoint. The social justice project largely consists of highlighting disparities between oppressor groups and oppressed groups and attributing disparities to immoral action on the part of oppressor groups. I think that most of these beliefs are actually false and the proposed solutions are harmful in that they will not actually solve the problem because the belief is false. I think that they make social relations worse.
More egregious is social justice advocates’ propensity for censorship in the name of emotional harm-avoidance, and their willingness to attack the character of people who disagree with their viewpoint as bigots of various types. And most egregious is the small minority who actively cause reputational damage, firings, and ostracism.
I think that various harmful social views and policies persist because many social justice advocates think they’re so right they don’t need particularly good truth-seeking behavior.
You might be able to arrive at diversity and inclusion from utilitarian and truth seeking norms but you can’t get to equity from them.
Note that there is now at least one post on the LW front page that is at least indirectly about the Bostrom stuff. I am not sure if it was there before and I missed it or what.
And others’ comments have updated me a bit towards the forum vs. forum difference being less surprising.
I still think there is something like the above going on, though, as shown by the kinds of views being expressed + who’s expressing them just on EA Forum, and on social media.
But I probably should have left LW out of my “argument” since I’m less familiar with typical patterns/norms there.
The indirectness is also quite relevant to that. On LessWrong it’s pretty encouraged to take current events and try to extract generalizable lessons from them, and make statements that are removed from the local political landscape. I am glad that post was written, and would have been happy about it independently of any Bostrom stuff going on.
Can I ask for a link to this ‘indirect post’? I’m interested in the generalized lessons being advertised here, but couldn’t find the post after looking on LW.
https://www.lesswrong.com/posts/GqD9ZKeAbNWDqy8Mz/a-general-comment-on-discussions-of-genetic-group
Spot on.
The rationalist community celebrates the virtue of epistemic integrity at all costs and celebrates the expression of opinions when they are deeply unpopular.
‘Normie EAs’ are not willing to sacrifice consequentialist impact for these virtues.
Is the conversation censored or are people just not discussing it?