Just to be clear, I think many of us in the community are not uncertain about whether racism and sexism are part of EA. Rather, I’m certain that they are, in the sense that many in the community have exhibited them in discussions over the last few weeks.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
I wish the Google Doc had been more specific. It could’ve said things like:
It’s important to treat people with respect regardless of their race/sex
It’s important to reduce suffering and increase joy for everyone regardless of their race/sex
We should be reluctant to make statements which could be taken as “scientific” justification for ignoring either of the previous bullet points
As written, it seems like the doc has the disadvantage of being ripe for abuse, without the advantage of providing guidelines that would let someone know whether the signatories would object to their comment. I think on the margin, this doc pushes us towards a world where EAs are spending less time on high-impact do-gooding, and more time reading social media to make sure we comply with the latest thinking around anti-racism/anti-sexism.
We should be reluctant to make statements which could be taken as “scientific” justification for ignoring either of the previous bullet points
Thank you for stating plainly what I suspect the original doc was trying to hint at.
That said, now that it’s plainly stated, I disagree with it. The world is too connected for that.
Taken literally, “could be taken” is a ridiculously broad standard. I’m sure a sufficiently motivated reasoner could take “2+2=4” as justification for racism. This is not as silly a concern as it sounds, since we’re mostly worried about motivated reasoners, and it’s unclear how motivated a reasoner we should be reluctant to offer comfort to. But let’s look at some more concrete examples:
In early 2020, people were reluctant to warn about COVID-19 because it could be taken as justification for anti-Chinese racism. I can’t actually follow the logic that goes from “A dangerous new disease emerged in China” to “I should go beat up someone of Chinese ancestry,” but it seems a few people who had been itching for an excuse did. Nevertheless, given the relative death tolls, we clearly should have had more warnings and more preparations. The next pandemic will likely also emerge in a place containing people against whom racism is possible (base rates, if nothing else), and pandemic-preparedness people need to be ready to act anyway.
Similarly, many people tried to bury the fact that monkeypox was sexually transmitted because it could lead to homophobia. So instead they warned of a coming pandemic. False warnings are extremely bad for preparedness, draining both our energy and our credibility.
Political and economic institutions are a potentially high-impact cause area in both the near and far term (albeit dubiously tractable). Investigating them pretty much requires looking at history, and at least sometimes saying that Western institutions are better than others.
Going back to Bostrom’s original letter, many anti-racists have taken to denying the very idea of intelligence in order to reject it. It’s hard to work on superintelligence-based x-risk (or many other things) without that concept.
I think you make good points—these are good cases to discuss.
I also think that motivated reasoners are not the main concern.
My last bullet point was meant as a nudge towards consequentialist communication. I don’t think consequentialism should be the last word in communication (e.g. lying to people because you think it will lead to good consequences is not great).
But consequences are an important factor, and I think there’s a decent case to be made that e.g. Bostrom neglected consequences in his apology letter. (Essentially making statements which violated important and valuable taboos, without any benefit. See my previous comment on this.)
For something like COVID, it seems bad to downplay it, but it also seems bad to continually emphasize its location of origin in contexts where that information isn’t relevant or important.
“We should be reluctant” represents a consideration against doing something, not a complete ban.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
James Watson’s denial of having made racist statements is a social fact worth noting. Most ‘alt-center,’ etc. researchers in HBD, and the latest thinking on euphemisms intended to scientifically reappropriate racism for metapolitical and game-theoretic purposes, will, perforce, never outright say so.
To be clear, I don’t think many EAs are formally working in race science, and surely skeptical and morally astute EAs can have the integrity to admit to having made racist comments, or to reasonably disagree. (And no: as an African American EA on the left, I don’t think we should unsubscribe every HBD-EA, Bostrom, etc., from social life. Instead, we should model a safer environment for all of us to be categorically wrong in. Being effective means getting all x-risks, compound x-risks, etc. right the first time.)
But after mulling over most of the HBD-affirming defenses of Bostrom’s email/apology that I’ve read or engaged with on the EA Forum, excluding those that were obviously (yet highly upvoted) red pills from bad actors, I think there are other reasons many of those EAs won’t say their comments were racist, even if they themselves are not actually certain their comments are non-racist.
My hunch is that whether those EAs see HBD as part of the hard core or the protective belt of longtermism/EA’s research program may be a good predictor of whether they believe, and therefore would be willing to say, that their comments were racist.[1]
For these reasons, among others, I think this instance of Hirschman’s rhetoric of reaction above is mistaken. It is not disvaluable that community builders in a demographically, socially, and epistemically isolated elitist technocratic movement like EA don’t allow the best provisional statement of their stance on these issues to become the enemy of the good.
I was relieved to see this, as well as the fact that Guy made the pushback I wish I’d had time to make three days ago. If there’s any way I can support your efforts, please let me know!
1.1. For want of an intensional definition of value-alignment.
1.2. I take little pleasure in suggesting that one contributing factor to this problem within EA may be HBD-relevant beliefs strongly coupled with, e.g., Beckstead et al.’s (frankly narrow and imaginatively lacking) stance on the most likely sources of economic innovation in the future, which may therefore have greater instrumental value to longtermist utopia. And even anti-eugenics has its missteps.