I appreciate you writing this, it seems like a good and important post. I’m not sure how compelling I find it, however. Some scattered thoughts:
In point 1, it seems like the takeaway is “democracy is broken because most voters don’t care about factual accuracy, don’t follow the news, and elections are not a good system for deciding things; because so little about elections depends on voters getting reliable information, misinformation can’t make things much worse”. You don’t actually say this, but this appears to me to be the central thrust of your argument — to the extent modern political systems are broken, it is not in ways that are easily exacerbated by misinformation.
Point 3 seems to be mainly relevant to mainstream media, but I think the worries about misinformation typically focus on non-mainstream media. In particular, when people say they “saw X on Facebook”, they’re not basing their information diet on trustworthiness and reputation. You write, “As noted above (#1), the overwhelming majority of citizens get their political information from establishment sources (if they bother to get such information at all).” I’m not sure what exactly you’re referencing here, but it looks to me like people are getting news from social media about ⅔ as much as from news websites/apps (see “News consumption across digital platforms”). This is still a lot of social media news, which should not be discounted.
I don’t think I find point 4 compelling. I expect the establishment to have access to slightly, but not massively, better AI. But more importantly, I don’t see how this helps? If it’s easy to make pro-vaccine propaganda but hard to make anti-vax propaganda, I don’t see how this is a good situation? It’s not clear that propaganda counteracts other propaganda efficiently enough that those with better AI propaganda will win out (e.g., insularity, with people mostly seeing content that aligns with their existing beliefs, might mean that the mere existence of counter-propaganda has little effect). You write “Anything that anti-establishment propagandists can do with AI, the establishment can do better”, but propaganda is probably not a zero-sum, symmetric weapon.
Overall, it feels to me like these are decent arguments about why AI-based disinformation is likely to be less of a big deal than I might have previously thought, but they don’t feel super strong. They feel largely handwavy in the sense of “here is an argument which points in that direction”, but it’s really hard to know how hard they push that direction. There is ample opportunity for quantitative and detailed analysis (which I generally would find more convincing), but that isn’t made here, and is instead obfuscated in links to other work. It’s possible that the argument I would actually find super convincing here is just way too long to be worth writing.
Again, thanks for writing this, I think it’s a service to the commons.
Thanks for this. A few points:
Re. point 1: Yes, I agree with your characterisation in a way: democracy is already a kind of epistemological disaster. Many treatments of dis/misinfo assume that people would be well-informed and make good decisions if not for exposure to dis/misinfo. That’s completely wrong and should affect how we view the marginal impact of AI-based disinformation.
On the issue of social media, see my post: https://www.conspicuouscognition.com/p/debunking-disinformation-myths-part. Roughly: in my view, people tend to greatly overestimate how much others inform themselves via social media.
Re. point 4: I’m not claiming it’s a good thing if establishment propaganda outcompetes anti-establishment propaganda. As I explicitly say, I think this is actually a genuine danger. It’s just a different danger from the one most people focus on when they think about this issue. More generally, in thinking about AI-based disinformation, people tend to assume that AI will only benefit disinformation campaigns. That is not true, and the fact that it is not true should be taken into consideration when evaluating the impact of AI-based disinformation.
You write of my arguments: “They feel largely handwavy in the sense of “here is an argument which points in that direction”, but it’s really hard to know how hard they push that direction. There is ample opportunity for quantitative and detailed analysis (which I generally would find more convincing), but that isn’t made here, and is instead obfuscated in links to other work.”
It’s a fair point. This was just written up as a blog post as a hobby in my spare time. However, what really bothers me is the asymmetry here: There is a VAST amount of alarmism about AI-based disinformation that is guilty of the problems you criticise in my arguments, and in fact features much less reference to existing empirical research. So it felt important for me to push back a bit, and more generally I think it’s important that arguments for and against risks here aren’t evaluated via asymmetric standards.
Elaborating on point 1 and the “misinformation is only a small part of why the system is broken” idea:
The current system could be broken in many ways while still sitting at an equilibrium of sorts. Upsetting this equilibrium could have substantial effects: for instance, people’s built-up immune response to current forms of misinformation is not as well trained as their immune response to traditionally biased media.
Additionally, intervening on misinformation could be far more tractable than other methods of improving things. I don’t have a solid grasp of what the problem is and what makes it worse, but a number of potential causes do seem much harder to intervene on than misinformation: general ignorance, poor education, political apathy. It could be that misinformation makes the situation merely 5% worse but is substantially easier to fix than these other issues.
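To make that concrete, here is a toy cost-effectiveness calculation (every number below is invented purely for illustration; only the 5% figure comes from the paragraph above): suppose misinformation accounts for 5% of the overall problem, an intervention could fix half of that, and doing so costs 1 unit of effort; suppose poor education accounts for 40% of the problem, but only a tenth of it is fixable, at a cost of 10 units of effort. Then:
misinformation: 0.05 × 0.5 / 1 = 0.025 “problem units” removed per unit of effort
education: 0.40 × 0.1 / 10 = 0.004 “problem units” removed per unit of effort
On these made-up numbers, the misinformation intervention is roughly six times more cost-effective despite targeting a much smaller cause, which is all the tractability argument needs.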