Good questions!
Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn't pick up on much writing and discussion about these points.) So it's a bit beyond my area.
But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:
I suspect misinformation at least could be an "effective weapon" against countries or peoples, in the sense of causing them substantial damage.
I'd see (unfounded) conspiracy theories and smear campaigns as subtypes of spreading misinformation, rather than as something qualitatively different. But I think today's technology allows for spreading misinformation (of any type) much more easily and rapidly than people could previously.
At the same time, today's technology also makes flagging, fact-checking, and otherwise countering misinformation easier.
I'd wildly speculate that, overall, the general public are much better informed than they used to be, but that purposeful efforts to spread misinformation will more easily have major effects now than previously.
This is primarily based on the research I've seen (see my other comment on this post) indicating that even warnings about misinfo, and even (correctly recalled!) corrections of misinfo, won't stop that misinfo from having an effect.
But I don't actually know of research that's looked into this. We could perhaps call this question: How does the "offense-defense" balance of (mis)information spreading scale with better technology, more interconnectedness, etc.? (I take the phrase "offense-defense balance" from this paper, though it's possible my usage here is not in line with what the phrase should mean.)
My understanding is that, in general, standard ways of counteracting misinfo (e.g., fact-checking, warnings) tend to be somewhat but not completely effective. I expect this would be true whether the misinfo is spread accidentally, spread deliberately by (say) a random troll, or spread deliberately by (say) a major effort on the part of a rival country.
But I'd expect that the latter case is one where the resources dedicated to spreading the misinfo are more likely to overwhelm the resources dedicated to counteracting it. So the misinfo may end up having more influence for that reason.
We could also perhaps wonder about how the "offense-defense" balance of (mis)information spreading scales with more resources. It seems plausible that, after a certain amount of resources dedicated by both sides, the public are just saturated with the misinfo to such an extent that fact-checking doesn't help much anymore. But I don't know of any actual research on that.