Generalizing a lot, it seems that “normie EAs” (IMO correctly) see glaring problems with Bostrom’s statement and want this incident to serve as a teachable moment.
As a “rationalist-EA”, I would be curious if you could summarize what lessons you think should be drawn from this teachable moment (or link to such a summary that you endorse).
In particular, do you disagree with the current top comment on this post?
(To me, their Q1 seems like it highlights what should be the key lesson, while their Q2 provides important context that mitigates how censorious we should be in our response.)
Happy to comment on this, though I’ll add a few caveats first:
- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists (“many of my best friends/colleagues are rationalists”), or to imply that there aren’t some upsides to the communities’ overlap
- I am not sure what “institutional EA” should be doing about all this
- Since some of these are complex topics and ideally I’d want to cite lots of sources etc. in a detailed positive statement on them, I am using the “things to think about” framing. But hopefully this gives some flavor of my actual perspective while also pointing in fruitful directions for open-ended reflection.
- I may be able to follow up on specific clarifying Qs, though I’m also not sure how closely I’ll follow replies, so try to get in touch with me offline if you’re interested in further discussion.
- The upvoted comment is pretty long and I don’t really want to get into line-by-line discussion of specific agreements/disagreements, so will focus on sharing my own model.
Those caveats aside, here are some things I think EA-rationalists might want to think about in light of recent events:
- Different senses of the word racism (~the “believing/stating that race is a ‘real thing’/that there are non-trivial differences between races (especially cognitive ones) that anyone should care about” sense, and the “consciously or unconsciously treating people better/worse given their race” sense), why some people think the former is bad/should be treated with extreme levels of skepticism and not just the latter, and whether the line between them might be finer in practice than some think.
- Why the rationalist community seems to treat race/IQ as an area where one should defer to “the scientific consensus” but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
- Whether the purported consensus folks often refer to actually exists, and what kinds of interpretations/takeaways one might draw from specific results/papers other than literal racism in the first sense above (I recommend The Genetic Lottery’s section on this).
- What the information value of “more accurate [in the red pill/blackpill sense] views on race” would even be “if true,” given that one never interacts with a distribution but with specific people.
- How Black people and other folks underrepresented in EA/rationalist communities, who often face multiple types of racism in the senses above, might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.
I’ll limit myself to one (multi-part) follow-up question for now —
Suppose someone in our community decides not to defer to the claimed “scientific consensus” on this issue (which I’ve seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?
I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the rationalist side?).
Let’s assume they try to distinguish between the two senses of “racism” that you mention, and try to treat all people respectfully and fairly. They don’t make a point of trumpeting their conclusion, since it’s not likely to make people feel good, and is generally not very helpful since we interact with individuals rather than distributions, as you say.
Let’s say they also try to examine their own biases and take into account how those might have influenced their interpretation of various claims and pieces of data. But after doing that, their honest assessment is still the same.
Beyond not broadcasting their view, and trying to treat people fairly and respectfully, would you say that they should go further, and pretend not to have reached the conclusion that they did, if it ever comes up?
Would you have any other advice for them, other than maybe something like, “Check your work again. You must have made a mistake. There’s an error in your thinking somewhere.”?
I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here—there are lots of considerations at play.
One view I hold, though, is something like “the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you’re considering the [personal/community-level] social implications thereof, is non-zero.” We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition in all such cases you will think you are right and others wrong, so there is a cost. But I don’t think it is automatically/definitionally bad for people to do that to some extent, and indeed much of the progress on issues like civil rights, gay rights, etc. in the US has resulted in large part from actions getting ahead of beliefs among people who didn’t “get it” yet, with cultural/ideological change gradually following via generational replacement, pop culture changes, etc. Obviously people rarely think that they are in the wrong, but it’s hard to be sure, and I don’t think we [the world, EA] should be aiming for a culture where there are never repercussions for expressing beliefs that, in the speaker’s view, are true. Again, that’s consistent with people disagreeing about particular cases; I’m just sharing my general view here.
This shouldn’t only work in one ideological “direction,” of course, which may be a crux in how people react to the above. Some may see the philosophy above as (exclusively) an endorsement of wokism/cancel culture etc. in its entirety/current form [insofar as that were a coherent thing, which I’m not sure it is]. While I am probably less averse to some of those things than some LW/EAF readers, especially on the rationalist side, I also think that people should remember that restraint can be positive in many contexts. For example, in my effort to engage and in my social media activities lately, I am trying to be careful to be respectful to people who identify strongly with the communities I am critiquing, and have held back some spicy jokes (e.g. playing on the “I like this statement and think it is true” line, which just begs for memes), precisely because I want to avoid alienating people who might be receptive to the object-level points I’m making, and because I don’t want to unduly egg on critiques by other folks on social media who I think sometimes go too far in attacking EAs, etc.
Is it okay if I give my personal perspective on those questions?
I suppose I should first state that I don’t expect that skin color has any effect on IQ whatsoever, and so on. But… I feel like the controversy in this case (among EAs) isn’t about whether one believes that or not [as EAs never express that belief AFAIK], but rather about whether one should do things like (i) reach a firm conclusion based purely on moral reasoning (or something like that), and (ii) attack people who gather evidence on the topic, who merely learn about and comment on the topic, or who don’t learn much about the topic but commit the sin of not reaching the “right” conclusion within their state of ignorance.
My impression is that there is no scientific consensus on this question, so we cannot defer to it. Also, doesn’t the rationalist community in general, and EA-rationalists in particular, accept the consensus on most topics, such as global warming, vaccine safety, homeopathy, nuclear power, and evolution? I wonder if you are seeing LW’s tolerance of skepticism, or its relative tolerance of certain ideas/claims, and concluding that the tolerance itself is problematic. But maybe I am mistaken about whether the typical aspiring rationalist agrees with the various consensuses.
[Whether the purported consensus folks often refer to actually exists] The only consensus I think exists is that one’s genetic code can, in principle, affect intelligence, e.g. one could theoretically be a genius, an idiot, or an octopus, for genetic reasons (literally, if you have the right genes, you are an octopus, with the intelligence of an octopus, “because of your genes”). I don’t know whether or not there is some further consensus that relates somehow to skin color, but I do care about the fact that even the first matter is scarily controversial. There are cases where some information is too dangerous to be widely shared, such as “how to build an AGI” or “how to build a deadly infectious virus with stuff you can order online”. Likewise, it would be terrible to tell children that their skin color is “linked” to lower intelligence; it’s “infohazardous if true” (because it has been observed that children in general may react to negative information by becoming discouraged and ending up less skilled). But adults should be mature enough to be able to talk about this like adults. Since they generally aren’t that mature, what I wonder is how we should act given that there are confusing taboos and culture wars everywhere. For example, we can try adding various caveats and qualifications, but the Bostrom case demonstrates that these are often insufficient.
[What the information value of “more accurate [...] views on race” would even be “if true,”] I’d say the information value is low (which is why I have little interest in this topic) but that the disvalue of taboos is high. Yes, bad things are bad, but merely discussing bad things (without elaborate paranoid social protocols) isn’t.
[How Black people and other folks underrepresented [...] might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.] That’s a great question! I suspect that reactions differ tremendously between individuals. I also suspect that first impressions are key, so whatever appears at the top of this page, for instance, is important, but not nearly as important as whatever page about this topic is most widely circulated. But… am I wrong to think that the average Black person would be less outraged by an apology that begins with “I completely repudiate this disgusting email from 26 years ago” than some people on this very forum are?
[Why the rationalist community seems to treat race/IQ as an area where one should defer to “the scientific consensus” but is quick to question the scientific community [...] on a range of other topics like ivermectin/COVID generally, AI safety, etc.] With ivermectin, there was a time when the best meta-analyses were pro-ivermectin but the scientific establishment was against ivermectin. Trusting those meta-analyses, which were published in reputable peer-reviewed journals, is poorly characterized as “not deferring to the scientific consensus”. Scott also wrote a deep dive on ivermectin and the evidence for it in the scientific literature.
You might ask yourself, “Why doesn’t Scott Alexander write a deep dive on the literature of IQ and race?” Why don’t other rationalists on LessWrong write deep dives on the literature of IQ and race, and on which hypotheses are supported by the literature and which aren’t?
From a truth-seeking perspective it would be nice to have such literature deep dives. From a practical point of view, writing deep dives on the literature of IQ and race and having in-depth discussions about them has a high likelihood of offending people. The effort and risks that come with it are high enough that Scott is very unlikely to write such a post.
[One view I hold, though, is something like “the optimal amount of self-censorship [...] is non-zero.”] I think that there’s broad agreement on this, and that self-censorship is one of the core reasons why rationalists are not engaging as deeply with the literature around IQ and race as we did with ivermectin or COVID.
On the other hand, there are situations where there are reasons to actually speak about an issue, and people still express their views even if they would prefer to just avoid talking about the topic.
Thanks, I appreciate the thoughtful response!
My view is that the rationalist community deeply values the virtue of epistemic integrity, at all costs: accurately expressing your opinion regardless of social acceptability.
The EA community is focused on approximately maximising consequentialist impact.
Rationalist EAs should recognise when these virtues of epistemic integrity and epistemic accuracy are in conflict with maximising consequentialist impact, whether via the direct, unintended consequences of expressing your opinions or via effects on EA’s reputation.
For what it’s worth, I have my commitment to honesty primarily for consequentialist reasons.
That makes sense, and I would agree with the idea that honesty is usually helpful for consequentialist reasons, but I think it is important to recognise cases where it is not.
Broadly, these are cases where the view you’re expressing doesn’t really help you do more good and brings a lot of harm to your reputation.
So as much as I disagree with Bostrom’s object-level views on race/IQ, I think he should have lied about his views.
Another example I wrote down elsewhere:
If you were an atheist in a rural, conservative part of Afghanistan today aiming to improve the world by challenging the mistreatment of women and LGBT people, and you told people that you think that God doesn’t exist, even if that was you accurately expressing your true beliefs, you would be so far from the Overton Window that you’re probably making it more difficult for yourself to improve things for LGBT people and women. Much better to say that you’re a Muslim and you think women and LGBT people should be treated better.
“Teachable moment” means that you’re supposed to see what the politically advantageous thing is and then do it. In this case that would be completely ejecting Bostrom from all association with EA.
I think it’s a bit more nuanced than that + added some more detail on my views below.