Epistemic status: During my psychology undergrad, I did a decent amount of reading on relevant topics, in particular under the broad label of the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017).
From this paper's abstract:

"Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning – giving detailed information about the continued influence effect (CIE) – succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning – reminding people that facts are not always properly checked before information is disseminated – was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether." (emphasis added)
This seems to me to suggest some value in including "epistemic status" messages up front, but also that doing so doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims.
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
Here are a couple of other seemingly relevant quotes from papers I read back then:
"retractions [of misinformation] are less effective if the misinformation is congruent with a person's relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation]." (source) (see also this source)
"we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a "false balance"], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions." (emphasis added) (source)
This seems relevant to norms around "steelmanning" and explaining reasons why one's own view may be inaccurate. Those overall seem like very good norms to me, especially given that EAs typically write about issues where there genuinely is far less consensus than there is around things like the autism-vaccine "controversy" or climate change. But it does seem those norms could lead readers to overweight counterarguments that are actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. That's all my own speculative generalisation from the findings on "falsely balanced" coverage, though.
I've been considering brushing up on this literature to write a post for the forum on how to balance the risks of spreading misinformation/flawed ideas against norms among EAs and rationalists around things like honestly contributing your views/data points to the general pool and trusting that people will update on them only to the appropriate degree. Reactions to this comment will inform whether I decide that investing time in that would be worthwhile.