Longer title for this question: To what extent does misinformation/disinformation (or the rise of deepfakes) pose a problem? And to what extent is it tractable?
Are there good analyses of the scope of this problem? If not, does anyone want to do a shallow exploration?
Are there promising interventions (e.g. certificates of some kind; a rough sketch of that idea is at the end of this post) that could be effective (in the important sense)?
Context and possibly relevant links:
Deepfakes: A Grounded Threat Assessment—Center for Security and Emerging Technology (I’ve only skimmed the beginning of this paper — would really appreciate a partial summary or an epistemic spot check of some kind)
Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn
Nina Schick on disinformation and the rise of synthetic media - 80,000 Hours
I’m posting this because I’m genuinely curious, and feel like I lack a lot of context on this. I haven’t done any relevant research myself.
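To make the "certificates" question a bit more concrete: one family of proposals (content-provenance schemes such as C2PA / Content Credentials) has cameras, editing tools, or publishers cryptographically sign media at creation time, so that anyone can later check that a file is unmodified and really came from the claimed source. Below is a minimal sketch of just the sign/verify step, assuming the Python `cryptography` package; the function names are illustrative rather than part of any real standard, and actual schemes add key management, metadata, and a chain of trust.

```python
# Minimal sketch of a "content certificate": a creator signs a media file at
# publication time, and anyone with the matching public key can later verify
# that the bytes are unchanged and came from that creator.
# Assumes the third-party `cryptography` package (pip install cryptography);
# the function names here are illustrative, not tied to any particular standard.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Produce a detached signature that acts as the 'certificate'."""
    return private_key.sign(media_bytes)


def verify_media(public_key: Ed25519PublicKey, media_bytes: bytes, signature: bytes) -> bool:
    """Check that the media matches the signature issued by the key holder."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    original = b"frame data of a genuine video"
    certificate = sign_media(key, original)

    print(verify_media(key.public_key(), original, certificate))          # True
    print(verify_media(key.public_key(), b"tampered frame", certificate))  # False
```

The hard part presumably isn't the cryptography but adoption and key management, and a valid signature only tells you who published something, not whether it's true.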
This isn’t a particularly deep or informed take, but my perspective on it is that the “misinformation problem” is similar to what Scott called the cowpox of doubt:
10 years ago, it was popular to hate on moon-hoaxing and homeopathy; now it’s popular to hate on “misinformation”. Fixating on obviously-wrong beliefs is probably counterproductive to forming correct beliefs on important and hard questions.
You mean people hate on others who fall for misinformation? I haven’t noticed that so far. My impression of the misinformation discourse is ~ “Yeah, this shit is scary, today it might still be mostly easy to avoid, but we’ll soon drown in an ocean of AI-generated misinformation!”
Which also doesn’t seem right to me. I expect this to be in large part a technical problem that will mostly get solved, precisely because it is (and probably will remain) such a prominent issue in the coming years, affecting many of the most profitable tech firms.
Excerpt from Deepfakes: A Grounded Threat Assessment—Center for Security and Emerging Technology (I haven’t read the whole paper):
Is it tractable?
One might argue that the amount of misinformation in the world is decreasing, not increasing. Maybe we’re much more aware of it, which would be a good thing.
LessWrong and the EA Forum are making progress on this, no? This is one of my top ideas for how tech can help our causes.
Wikipedia also helps a lot, I think. There might be other such ideas (because of inadequate equilibria), so if we find them, it might be a worthy use of EA founders + funds: a relatively easy way to provide a ton of value to society in a way that is hard (or maybe impossible) to monetize.
Regarding deepfakes:
Scott Alexander wrote about it:
https://slatestarcodex.com/2020/01/30/book-review-human-compatible/
This part stuck with me: