Some brief, off-the-cuff sociological reflections on the Bostrom email:
EA will continue to be appealing to those with an ‘edgelord’ streak. It’s worth owning up to this, and considering how to communicate going forward in light of that fact.
I think some of the reaction, with its insistence on the importance of population-level averages, is indicative of an unhealthy attitude towards such averages.
I also think the ‘epistemic integrity’ angle is important.
Each consideration is discussed below.
1.
I think basically everyone here agrees that white people shouldn’t be using (or mentioning) racial slurs. I think you should also avoid generics, i.e., unqualified generalizations about groups. You very rarely gain anything, even from a purely epistemic point of view, from saying (for example): “men are more stupid than women”.
EA skews young, and does a lot of outreach on university campuses. I also think that EA will continue to be attractive to people who like to engage in the world via a certain kind of communication, and I think many people interested in EA are likely to be drawn to controversial topics. I think this is unavoidable. Given that it’s unavoidable, it’s worth being conscious of this, and directly tackling what’s to be gained (and lost) from certain provocative modes of communication, in what contexts.
The Flint water crisis affected an area that is majority African American, and we have strong evidence that lead poisoning lowers IQ. Here’s one way of ‘provocatively communicating’ those facts:
The US water system is poisoning black people.
I don’t like this statement, but I think it is true.
Deliberately provocative communication probably does have its uses, but it’s a mode of communication that can be in tension with nuanced epistemics, as well as kindness. If I’m to get back in touch with my own old edgelord streak for a moment, I’d say that one (though obviously not the major) benefit of EA is the way it can transform ‘edgelord energy’ into something that can actually make the world better.
I think, as with SBF, there’s a cognitive cluster that draws people both towards EA and towards certain sorts of actions most of us wish to reflectively disavow. I think it’s reasonable to say: “EA messaging will appeal (though obviously will not only appeal) disproportionately to a certain kind of person. We recognize the downsides in this, and here’s what we’re doing in light of that.”
2.
There are fewer women than men in computer science. But this doesn’t give you any grounds, once a woman says “I’m interested in computer science”, to respond with “oh, well, but maybe she’s lying, or maybe it’s a joke, or … ”
Why? Because you have evidence that means you don’t need to rely on such coarse data! You don’t need to rely on population-level averages! To the extent that you do, or continue to treat such averages as highly salient in the face of more relevant evidence, I think you should be criticized. I think you should be criticized because it’s a sign that your epistemics have been infected by pernicious stereotypes, which make you worse at understanding the world, in addition to making you more likely to cause harm when interacting with that world.
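To make the ‘more relevant evidence screens off the base rate’ point concrete, here’s a toy Bayes calculation. Every number below is made up purely for illustration; nothing here is real data about any group.

```python
# Toy Bayesian update with entirely made-up numbers: once someone directly
# tells you they're interested in computer science, the population-level
# base rate stops doing much work.

def posterior_interested(base_rate, p_say_if_interested=0.9, p_say_if_not=0.01):
    """P(interested | says "I'm interested"), by Bayes' rule."""
    numerator = p_say_if_interested * base_rate
    denominator = numerator + p_say_if_not * (1 - base_rate)
    return numerator / denominator

# Hypothetical base rates of CS interest in two groups (illustrative only).
for label, base_rate in [("group A", 0.05), ("group B", 0.20)]:
    print(f"{label}: prior {base_rate:.0%} -> posterior "
          f"{posterior_interested(base_rate):.0%}")
```

Under these made-up numbers, both posteriors land above 80%: the person’s own statement carries far more information than the group-level average ever did, which is exactly why clinging to the average is an epistemic mistake, not just an unkind one.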
3.
You should be careful to believe true things, even when they’re inconvenient.
On the epistemic level, I actually think that we’re not in an inconvenient possible world with respect to ‘genetic influence on IQ’, partially because I think certain conceptual discussions of ‘heritability’ are confused (high heritability within a group, for instance, tells you nothing by itself about the causes of differences between groups), and partially because I think it’s obviously reasonable to look at the (historically quite recent!) effects of slavery, and conclude “yeaah, I’m not sure I’d expect the data we have to look all that different, conditional on racism causing basically all of the effects we currently see”.
But, fine, suppose I’m in an inconvenient possible world. I could be faced with data that I’d hate to see, and I’d want to maintain epistemic integrity.
One reason I personally found Bostrom’s email sad was that I sensed a missing mood. To support this, here’s an intuition pump that might be helpful: suppose you’re back in the early days of EA, working for $15k in the basement of an estate agent. You’ve sacrificed a lot to do something weird, sometimes you feel a bit on the defensive, and you worry that people aren’t treating you with the seriousness you deserve. Then someone comes along, says they’ve run some numbers, and tells you that EA is more racist than other cosmopolitan groups, and that, despite EA’s intention to do good, it is actually far more harmful to the world than other comparable groups. Suppose further that we also ran surveys and IQ tests, and found that EA is also more stupid and unattractive than other groups. I wouldn’t say:
EA is harmful, racist, ugly, and stupid. I like this sentence and think it’s true.
Instead, I’d communicate the information, if I thought it was important, in a careful and nuanced way. If I saw someone make the unqualified statement quoted above, I wouldn’t personally wish to entrust that person with promoting my best interests, or with leading an institute directed towards the future of humanity.
I raise this example not because I wish to opine on contemporary Bostrom based on an email from twenty-six years ago. I raise it because, while (like 𝕮𝖎𝖓𝖊𝖗𝖆) I’m glad that Bostrom didn’t distort his epistemics in the face of social pressure, I think it’s reasonable to hold (like Habiba, apologies if this is an unfair paraphrase) that Bostrom didn’t take ownership of his previously missing mood, or communicate why his subsequent development leads him to repudiate what he said.
I don’t want to be unnecessarily punitive towards people who do shitty things. That’s not kindness. But I also want to be part of a community that promotes genuinely altruistic standards, including a fair sense of penance. With that in mind, I think it’s healthy for people to say: “we accept that you don’t endorse your earlier remark (Bostrom originally apologized within 24 hours, after all), but we still think your apology misses something important, and we’re a community that wants people who are currently involved to meet certain standards.”