Thanks for this. A few points:
Re. point 1: Yes, in a way I agree with your characterisation: democracy is already a kind of epistemological disaster. Many treatments of dis/misinformation assume that people would be well-informed and make good decisions if not for their exposure to dis/misinformation. That's completely wrong, and it should affect how we view the marginal impact of AI-based disinformation.
On the issue of social media, see my post: https://www.conspicuouscognition.com/p/debunking-disinformation-myths-part. Roughly: in my view, people tend to greatly overestimate how much others inform themselves via social media.
Re. point 4: I'm not claiming it's a good thing if establishment propaganda outcompetes anti-establishment propaganda. As I explicitly say, I think this is actually a genuine danger; it's just different from the danger most people focus on when they think about this issue. More generally, in thinking about AI-based disinformation, people tend to assume that AI will only benefit disinformation campaigns. That is not true, and the fact that it is not true should be taken into account when evaluating the impact of AI-based disinformation.
You write of my arguments: "They feel largely handwavy in the sense of 'here is an argument which points in that direction', but it's really hard to know how hard they push that direction. There is ample opportunity for quantitative and detailed analysis (which I generally would find more convincing), but that isn't made here, and is instead obfuscated in links to other work."
It's a fair point; this was just a blog post, written up as a hobby in my spare time. What really bothers me, though, is the asymmetry here: there is a VAST amount of alarmism about AI-based disinformation that is guilty of the very problems you criticise in my arguments, and that in fact features much less reference to existing empirical research. So it felt important to push back a bit, and more generally I think it's important that arguments for and against the risks here aren't held to asymmetric standards.