This isn’t a particularly deep or informed take, but my perspective on it is that the “misinformation problem” is similar to what Scott called the cowpox of doubt:
What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.
It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.”
But to me, the rationality movement is about Self-ing irrationality.
It is about realizing that you, yes you, might be wrong about the things that you’re most certain of, and nothing can save you except maybe extreme epistemic paranoia.
10 years ago, it was popular to hate on moon-hoaxing and homeopathy; now it’s popular to hate on “misinformation”. Fixating on obviously-wrong beliefs is probably counterproductive to forming correct beliefs on important and hard questions.
You mean people hate on others who fall for misinformation? I haven’t noticed that so far. My impression of the misinformation discourse is ~ “Yeah, this shit is scary, today it might still be mostly easy to avoid, but we’ll soon drown in an ocean of AI-generated misinformation!”
Which also doesn’t seem right. I expect this to be in large part a technical problem that will mostly get solved, because it is, and will probably remain, such a prominent issue in the coming years, affecting many of the most profitable tech firms.