To get the ball rolling, and give examples of some insights from these areas of research and how they might be relevant to EA, here’s an adapted version of a shortform comment I wrote a while ago:
Potential downsides of EA’s epistemic norms (which overall seem great to me)
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect of misinformation, and related areas, which might suggest downsides to some of EA’s epistemic norms. Examples of the norms I’m talking about include just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong.
From memory, this paper reviews research on the continued influence effect (CIE), and I perceived it to be high-quality and a good intro to the topic.
From this paper’s abstract:

Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning—giving detailed information about the continued influence effect (CIE)—succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning—reminding people that facts are not always properly checked before information is disseminated—was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether. (emphasis added)
This seems to me to suggest some value in including “epistemic status” messages up front, but also that this doesn’t make it totally “safe” to make posts before having familiarised oneself with the literature and checked one’s claims. (This may suggest potential downsides to both this comment and this whole AMA, so please consider yourself both warned and warned that the warning might not be sufficient!)
Similar things also make me a bit concerned about the “better wrong than vague” norm/slogan that crops up sometimes, and also make me hesitant to optimise too much for brevity at the expense of nuance. I see value in the “better wrong than vague” idea, and in being brief at the cost of some nuance, but it seems a good idea to make tradeoffs like this with these psychological findings in mind as one factor.
Here are a couple other seemingly relevant quotes from papers I read back then (and haven’t vetted since then):
“retractions [of misinformation] are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation].” (source) (see also this source)
“we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a “false balance”], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions.” (emphasis added) (source)
This seems relevant to norms around “steelmanning” and explaining reasons why one’s own view may be inaccurate. Those overall seem like very good norms to me, especially given EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine “controversy” or climate change. But it does seem those norms could perhaps lead to overweighting of the counterarguments when they’re actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But that’s all just my own speculative generalisations of the findings on “falsely balanced” coverage.
Two more examples of how these sorts of findings can be applied to matters of interest to EAs:
Seth Baum has written a paper entitled Countering Superintelligence Misinformation, drawing on this body of research. (I stumbled upon this recently and haven’t yet had a chance to read beyond the abstract and citations.)
In a comment, Jonas Vollmer applied ideas from this body of research to the matter of how best to handle interactions about EA with journalists.