Potential downsides of EA's epistemic norms (which overall seem great to me)
This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017, and I haven't reviewed the literature since then).
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect of misinformation (and related areas) that (speculatively) might suggest downsides to some of EA's epistemic norms (e.g., just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong).
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
From this paper's abstract:

Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning—giving detailed information about the continued influence effect (CIE)—succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning—reminding people that facts are not always properly checked before information is disseminated—was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether. (emphasis added)
This seems to me to suggest some value in including "epistemic status" messages up front, but also that this doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims.
Here are a couple of other seemingly relevant quotes from papers I read back then:
"retractions [of misinformation] are less effective if the misinformation is congruent with a person's relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation]." (source) (see also this source)
"we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a "false balance"], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions." (emphasis added) (source)
This seems relevant to norms around "steelmanning" and explaining reasons why one's own view may be inaccurate. Those overall seem like very good norms to me, especially given that EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine "controversy" or climate change. But it does seem those norms could lead readers to overweight counterarguments that are actually very weak, perhaps especially when communicating with wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But that's all my own speculative generalisation of the findings on "falsely balanced" coverage.