Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn’t pick up on much writing and discussion about these points.) So it’s a bit beyond my area.
But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:
I suspect misinformation could at least be an “effective weapon” against countries or peoples, in the sense of causing them substantial damage.
I’d see (unfounded) conspiracy theories and smear campaigns as subtypes of spreading misinformation, rather than as something qualitatively different. But I think today’s technology allows for spreading misinformation (of any type) much more easily and rapidly than people could previously.
At the same time, today’s technology also makes flagging, fact-checking, and otherwise countering misinformation easier.
I’d wildly speculate that, overall, the general public are much better informed than they used to be, but that purposeful efforts to spread misinformation can more easily have major effects now than they could previously.
This is primarily based on the research I’ve seen (see my other comment on this post), which indicates that even warnings about misinfo and (correctly recalled!) corrections of misinfo won’t fully stop that misinfo from having an effect.
But I don’t actually know of research that’s looked into this. We could perhaps call this question: How does the “offense-defense” balance of (mis)information spreading scale with better technology, more interconnectedness, etc.? (I take the phrase “offense-defense balance” from this paper, though it’s possible my usage here is not in line with what the phrase should mean.)
My understanding is that, in general, standard ways of counteracting misinfo (e.g., fact-checking, warnings) tend to be somewhat but not completely effective. I expect this would be true whether the misinfo is spread accidentally, spread deliberately by e.g. a random troll, or spread deliberately by e.g. a major effort on the part of a rival country.
But I’d expect that, in the latter case, the resources dedicated to spreading the misinfo are more likely to overwhelm the resources dedicated to counteracting it. So the misinfo may end up having more influence for that reason.
We could also wonder how the “offense-defense” balance of (mis)information spreading scales with more resources. It seems plausible that, beyond a certain level of resources dedicated by both sides, the public are so saturated with the misinfo that fact-checking no longer helps much. But I don’t know of any actual research on that.