Good questions!
Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn't pick up on much writing and discussion about these points.) So it's a bit beyond my area.
But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:
I suspect misinformation at least could be an "effective weapon" against countries or peoples, in the sense of causing them substantial damage.
I'd see (unfounded) conspiracy theories and smear campaigns as subtypes of spreading misinformation, rather than as something qualitatively different. But I think today's technology allows misinformation (of any type) to spread much more easily and rapidly than it previously could.
At the same time, today's technology also makes flagging, fact-checking, and otherwise countering misinformation easier.
I'd wildly speculate that, overall, the general public are much better informed than they used to be, but that purposeful efforts to spread misinformation will more easily have major effects now than previously.
This is primarily based on the research I've seen (see my other comment on this post) indicating that even warnings about misinfo and (correctly recalled!) corrections of misinfo won't stop that misinfo from having an effect.
But I don't actually know of research that's looked into this. We could perhaps call this question: How does the "offense-defense" balance of (mis)information spreading scale with better technology, more interconnectedness, etc.? (I take the phrase "offense-defense balance" from this paper, though it's possible my usage here is not in line with what the phrase should mean.)
My understanding is that, in general, standard ways of counteracting misinfo (e.g., fact-checking, warnings) tend to be somewhat but not completely effective. I expect this would be true for accidentally spread misinfo, misinfo spread deliberately by e.g. a random troll, or misinfo spread deliberately by e.g. a major effort on the part of a rival country.
But I'd expect that the latter case is one where the resources dedicated to spreading the misinfo are more likely to overwhelm the resources dedicated to counteracting it. So the misinfo may end up having more influence for that reason.
We could also perhaps wonder about how the "offense-defense" balance of (mis)information spreading scales with more resources. It seems plausible that, after a certain amount of resources dedicated by both sides, the public are just saturated with the misinfo to such an extent that fact-checking doesn't help much anymore. But I don't know of any actual research on that.