We should be reluctant to make statements which could be taken as “scientific” justification for ignoring either of the previous bullet points.
Thank you for stating plainly what I suspect the original doc was trying to hint at.
That said, now that it’s plainly stated, I disagree with it. The world is too connected for that.
Taken literally, “could be taken” is a ridiculously broad standard. I’m sure a sufficiently motivated reasoner could take “2+2=4” as justification for racism. This is not as silly a concern as it sounds, since we’re mostly worried about motivated reasoners, and it’s unclear how motivated a reasoner we should be reluctant to offer comfort to. But let’s look at some more concrete examples:
In early 2020, people were reluctant to warn about COVID-19 because it could be taken as justification for anti-Chinese racism. I can’t actually follow the logic that goes from “A dangerous new disease emerged in China” to “I should go beat up someone of Chinese ancestry,” but it seems a few people who had been itching for an excuse did. Nevertheless, given the relative death tolls, we clearly should have had more warnings and more preparations. The next pandemic will likely also emerge in a place containing people against whom racism is possible (base rate, if nothing else), and pandemic preparedness people need to be ready to act anyway.
Similarly, many people tried to bury the fact that monkeypox was sexually transmitted because it could lead to homophobia. So instead they warned of a coming pandemic. False warnings are extremely bad for preparedness, draining both our energy and our credibility.
Political and economic institutions are a potentially high-impact cause area in both the near and far term (albeit dubiously tractable). Investigating them is pretty much going to require looking at history, and at least sometimes saying that Western institutions are better than others.
Going back to Bostrom’s original letter, many anti-racists have taken to denying the very idea of intelligence in order to reject it. Hard to work on super-intelligence-based x-risk (or many other things) without that concept.
I think you make good points—these are good cases to discuss.
I also think that motivated reasoners are not the main concern.
My last bullet point was meant as a nudge towards consequentialist communication. I don’t think consequentialism should be the last word in communication (e.g. lying to people because you think it will lead to good consequences is not great).
But consequences are an important factor, and I think there’s a decent case to be made that e.g. Bostrom neglected consequences in his apology letter. (Essentially making statements which violated important and valuable taboos, without any benefit. See my previous comment on this.)
For something like COVID, it seems bad to downplay it, but it also seems bad to continually emphasize its location of origin in contexts where that information isn’t relevant or important.
“We should be reluctant” represents a consideration against doing something, not a complete ban.