One thing that you didn't raise, but which seems related and important, is how advancements in certain AI capabilities could affect the impacts of misinformation. I find this concerning, especially in connection with the point you make with this statement:
warning about it will increase mistrust and polarization, which might be the goal of the campaign
Early last year, shortly after learning about EA, I wrote a brief research proposal related to the combination of these points. I never pursued the research project, and have now learned of other problems I see as likely more important, but I still do think it'd be good for someone to pursue this sort of research. Here it is:
AI will likely allow for easier creation of fake news, videos, images, and audio (AI-generated misinformation; AIGM) [note: this is not an established term]. This may be hard to distinguish from genuine information. Researchers have begun exploring potential political security ramifications of this (e.g., Brundage et al., 2018). Such explorations could valuably draw on the literatures on the continued influence effect of misinformation (CIE; e.g., Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012), motivated reasoning (e.g., Nyhan & Reifler, 2010), and the false balance effect (e.g., Koehler, 2016).
For example, CIE refers to the finding that corrections of misinformation don't entirely eliminate the influence of that misinformation on beliefs and behaviours, even among people who remember and believe the corrections. For misinformation that aligns with one's attitudes, corrections are particularly ineffective, and may even "backfire", strengthening belief in the misinformation (Nyhan & Reifler, 2010). Thus, even if credible messages debunking AIGM can be rapidly disseminated, the misinformation's impacts may linger or even be exacerbated. Furthermore, as the public becomes aware of the possibility or prevalence of AIGM, genuine information may be regularly argued to be fake. These arguments could themselves be subject to the CIE and motivated reasoning, with further and complicated ramifications.
Thus, it'd be valuable to conduct experiments exposing participants to various combinations of fake articles, fake images, fake videos, fake audio, and/or a correction of one or more of these. This misinformation could vary in how indistinguishable from genuine information it is; whether it was human- or AI-generated; and whether it supports, challenges, or is irrelevant to participants' attitudes. Data should be gathered on participants' beliefs, attitudes, and recall of the correction. This would aid in determining how much the issue of CIE is exacerbated by the addition of video, images, or audio; how it varies by the quality of the fake or whether it's AI-generated; and how these things interact with motivated reasoning.
Such studies could include multiple rounds, some of which would use genuine rather than fake information. This could explore issues akin to false balance or motivated dismissal of genuine information. Such studies could also measure the effects of various "treatments", such as explanations of AIGM capabilities or of how to distinguish such misinformation from genuine information. Ideally, these studies would be complemented by opportunistic evaluations of authentic AIGM's impacts.
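For concreteness, here is a minimal sketch in Python of how the between-subjects cells implied by a full crossing of factors like those could be enumerated. The factor names and levels are placeholders I've chosen for illustration, not ones specified in the proposal, and a real study would almost certainly drop or collapse several of them:

```python
from itertools import product

# Illustrative factors and levels only (placeholder names, not the
# proposal's); the real levels would depend on available AI capabilities.
factors = {
    "modality":     ["article", "image", "video", "audio"],
    "generator":    ["human", "AI"],
    "fidelity":     ["crude", "near-indistinguishable"],
    "attitude_fit": ["congruent", "incongruent", "irrelevant"],
    "correction":   ["none", "debunking"],
}

# Enumerate every combination of factor levels (a full factorial design).
cells = list(product(*factors.values()))
print(len(cells), "between-subjects cells")  # 4 * 2 * 2 * 3 * 2 = 96

# Outcome measures per participant might include belief in the claim,
# attitude change, and recall of the correction.
for cell in cells[:3]:
    print(dict(zip(factors, cell)))
```

Even this toy version yields 96 cells, which is one reason the design sketched above would need to be pared down, or have some factors varied within subjects, in practice.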
One concern regarding this idea is that I'm unsure of the current capabilities of AI relevant to generating misinformation, and thus of what sorts of simulations or stimuli could be provided to participants. Thus, the study design sketched above is preliminary, to be updated as I learn more about relevant AI capabilities. Another concern is that relevant capabilities may currently be so inferior to how they'll later be that discoveries regarding how people react to present AIGM would not generalise to their reactions to later, stronger AIGM.
References:
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Anderson, H. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
Koehler, D. J. (2016). Can journalistic "false balance" distort public perception of consensus in expert opinion? Journal of Experimental Psychology: Applied, 22(1), 24-38.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106-131.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.
Thanks a lot!
I think "offense-defense balance" is a very apt term here. I wonder if you have any personal opinion on how to improve our situation on that front. When it comes to AI-powered misinformation in the media, it's particularly concerning how easily it can overrun our defenses, so that, even if we succeed in fact-checking every inaccurate statement, it'll require a lot of resources and probably lead to a situation of widespread uncertainty or mistrust, where people, incapable of screening reliable info, will succumb to confirmation bias or peer pressure (I feel tempted to draw an analogy with DDoS attacks, or even with the lemons problem).
So, despite everything I've read about the subject (though not very systematically), I haven't seen feasible, well-developed strategies to address this asymmetry, except for some papers on moderation in social networks and forums (and even that is quite time-consuming, unless moderators draw up clear guidelines, like in this forum). I wonder why societies (through authorities or self-regulation) can't agree to impose even minimal reliability requirements, like demanding captcha tests before messages can be spread (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source, something even newspapers refuse to do (my guess is that they are afraid this norm would compromise source confidentiality and their protections against lawsuits). If this were an established practice, one could easily screen out (at least grossly) unreliable messages by checking their source (or pointing out its absence), besides deterring would-be spreaders.
I think I've got similar concerns and thoughts on this. I'm vaguely aware of various ideas for dealing with these issues, but I haven't kept up with that, and I'm not sure how effective they are or will be in future.
The idea of making captcha requirements very widespread before things like commenting is one I haven't heard before, and it seems like it could plausibly cut off part of the problem at relatively low cost.
I would also quite like it if much better epistemic norms were widespread across society, such as people feeling embarrassed when it's pointed out that they stated something non-obvious as fact without referencing sources. (Whereas it could still be fine to state very obvious things as facts without always sharing sources, or to state non-obvious things as fairly confident conjectures rather than as facts.)
But some issues also come to mind (note: these are basically speculation, rather than drawing on research I've read):
It seems somewhat hard to draw the line between ok and not-ok behaviours (e.g., what claims are self-evident enough that it's ok to omit a source? What sort of tone and caveats are sufficient for various sorts of claims?)
And it's therefore conceivable that these sorts of norms could be counterproductive in various ways. E.g., they could lead to (more) silencing or ridicule of people raising alarm bells about low-probability, high-stakes events, because there's not yet strong evidence about those events, but no one will look for the evidence until someone starts raising the alarm bells.
Though I think there are some steps that seem obviously good, like requiring sources for specific statistical claims (e.g., "67% of teenagers are doing [whatever]").
This is a sociological/psychological rather than technological fix, which does seem quite needed, but also seems quite hard to implement. Spreading norms like that widely seems hard to do.
With a lot of solutions, it seems not too hard to imagine ways they could be (at least partly) circumvented by people or groups who are actively trying to spread misinformation. (At least when those people/groups are quite well-resourced.)
E.g., even if society adopted a strong norm that people must include sources when making relatively specific, non-obvious claims, well-resourced actors could then perhaps produce large-scale human- or AI-generated sources, made to look respectable at first glance, which could then be shared alongside the claims being made elsewhere.
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation. I'd guess that those more general approaches may better avoid the issues of drawing lines in the appropriate places and of being circumventable by active efforts, but may suffer more strongly from being quite intractable or crowded. (But this is just a quick guess.)
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation.
Agreed. But I don't think we could do that without changing the environment a little bit. My point is that rationality isn't just about avoiding false beliefs (maximal skepticism), but about forming them adequately, and it's way more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a Wikipedia entry, in a newspaper, in a WhatsApp message...
The core issue isn't really "statements that are false", or people who are actually fooled by them. The problem is that, if I'm convinced I'm surrounded by lies and nonsense, I'll keep following the same path I was on before (because I have a high credence that my beliefs are OK); it will just fuel my confirmation bias. Thus, the real problem with fake news is an externality. I haven't found any paper testing this hypothesis, though. If it is right, then most articles I've seen claiming that "fake news didn't affect political outcomes" might be wrong.
You can fool someone without telling any sort of lie. To steal an example I once saw on LW (I'm still trying to find the source): imagine a random sequence of 0s and 1s; now, an Agent feeds a Principal information about the sequence, like "there is a 1 at position n". To make the Principal believe the sequence is mostly made of 1s, all the Agent has to do is select which true statements to report, like "there are 1s at positions n, m and o".
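As a minimal sketch of that dynamic (toy numbers and code, purely illustrative): every report the Agent makes below is true, but because only positions holding a 1 are ever reported, a Principal who treats the reports as a random sample ends up nearly certain the sequence is mostly 1s.

```python
import random

random.seed(0)

# Hidden truth: only about 30% of the digits are 1s.
sequence = [1 if random.random() < 0.3 else 0 for _ in range(1000)]

# The Agent truthfully reports 20 positions, but only ones holding a 1.
reports = [(i, d) for i, d in enumerate(sequence) if d == 1][:20]

# A naive Principal treats the reports as a random sample and does a
# simple Beta(1, 1) update on "fraction of 1s in the sequence".
ones = sum(d for _, d in reports)
posterior_mean = (1 + ones) / (2 + len(reports))

print(f"True fraction of 1s:        {sum(sequence) / len(sequence):.2f}")  # ~0.30
print(f"Principal's posterior mean: {posterior_mean:.2f}")                 # ~0.95
```

Nothing in the Principal's update is wrong given the (false) assumption that the reports are unselected; the distortion comes entirely from what gets reported.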
But why would someone hire such an agent? Well, maybe the Principal is convinced most other accessible agents are liars; it's even worse if the Agent already knows some of the Principal's biases, and easier if Principals with similar biases are clustered in groups with similar interests and jobs, like social activists, churches, military staff and financial investors. Even denouncing this scenario does not necessarily improve things; I think that, at least in some countries, political outcomes were affected by common knowledge of statements like "military personnel support this; financial investors would never accept that". If you can convince voters they'll face an economic crisis or political instability by voting for candidate A, they will avoid doing so.
My personal anecdote on how this process may work even for a smart, scientifically educated person: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my "rationality skills" in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn't extrapolate to the atmosphere. I was astonished that I had overlooked this point (well, maybe it was mentioned in passing in a science class), and that he hadn't taken two minutes to google it (and find out that, yes, "greenhouse" is an analogy; the actual mechanism is that CO2 absorbs outgoing infrared radiation and re-emits part of it back toward Earth); but maybe I wouldn't have done so myself if I didn't already know that CO2 is pivotal in keeping Earth warm. However, after days of this, there was no happy end: our discussion basically ended with me pointing out that: a) he couldn't provide any scientific paper backing his overall thesis (even though I would have been happy to pay him if he could); b) he would raise objections against "anthropogenic global warming" without even caring to put a consistent credence on them, like first pointing to alternative causes for the warming, and then denying the warming itself. He didn't really believe (i.e., assign a high posterior credence to the claim) that there was no warming, or that it was a random anomaly, because those positions would be ungrounded, and so easy targets in a discussion. Since then, we have barely spoken.
P.S.: I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate, so mitigating what I've been calling the "lemons problem in news". But who rates the raters? Besides the risk of capture, I don't know how to make people actually trust the agencies in the first place.
Your paragraph on climate change denial by a smart, scientifically educated person reminded me of some very interesting work by a researcher called Dan Kahan.
An abstract from one paper:
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservativism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either unreflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Two other relevant papers:
Cultural Cognition of Scientific Consensus
Climate-Science Communication and the Measurement Problem
Parts of your comment reminded me of something that's perhaps unrelated, but seems interesting to bring up, which is Stefan Schubert's prior work on "argument-checking", as discussed on an 80k episode:
Stefan Schubert: I was always interested in "What would it be like if politicians were actually truthful in election debates, and said relevant things?" [...]
So then I started this blog in Swedish on something that I call argument checking. You know, there's fact checking. But then I went, "Well, there's so many other ways that you can deceive people except outright lying." So, that was fairly fun, in a way. I had this South African friend at LSE whom I told about this, that I was pointing out fallacies which people made. And she was like "That suits you perfectly. You're so judge-y." And unfortunately there's something to that.
[...]
Robert Wiblin: What kinds of things did you try to do? I remember you had fact checking, this live fact checking on-
Stefan Schubert: Actually that is, we might have called it fact checking at some point. But the name which I wanted to use was argument checking. So that was like in addition to fact checking, we also checked argument.
Robert Wiblin: Did you get many people watching your live argument checking?
Stefan Schubert: Yeah, in Sweden, I got some traction. I guess, I had probably hoped for more people to read about this. But on the plus side, I think that the very top showed at least some interest in it. A smaller interest than what I had thought, but at least you reach the most influential people.
Robert Wiblin: I guess my doubt about this strategy would be, obviously you can fact check politicians, you can argument check them. But how much do people care? How much do voters really care? And even if they were to read this site, how much would it change their mind about anything?
Stefan Schubert: That's fair. I think one approach which one might take would be to, following up on this experience, the very top people who write opinion pieces for newspapers, they were at least interested, and just double down on that, and try to reach them. I think that something that people think is that, okay, so there are the tabloids, and everyone agrees what they're saying is generally not that good. But then you go to the highbrow papers, and then everything there would actually make sense.
So that is what I did. I went for the Swedish equivalent of somewhere between the Guardian and the Telegraph. A decently well-respected paper. And even there, you can point out these glaring fallacies if you dig deeper.
Robert Wiblin: You mean, the journalists are just messing up.
Stefan Schubert: Yeah, or here it was often outside writers, like politicians or civil servants. I think ideally you should get people who are a bit more influential and more well-respected to realize how careful you actually have to be in order to really get to the truth.
Just to take one subject that effective altruists are very interested in, all the writings about AI, where you get people like professors who write the articles which are really very poor on this extremely important subject. It's just outrageous if you think about it.
Robert Wiblin: Yeah, when I read those articles, I imagine we're referring to similar things, I'm just astonished. And I don't know how to react. Because I read it, and I could just see egregious errors, egregious misunderstandings. But then, we've got this modesty issue that we were bringing up before. These are well-respected people. At least in their fields in kind of adjacent areas. And then, I'm thinking, "Am I the crazy one?" Do they read what I write, and they have the same reaction?
Stefan Schubert: I don't feel that. So I probably reveal my immodesty.
Of course, you should be modest if people show some signs of reasonableness. And obviously if someone is arguing for a position where your prior that it's true is very low. But if they're a reasonable person, and they're arguing for it well, then you should update. But if they're arguing in a way which is very emotive, and they're not really addressing the positions that we're holding, then I don't think modesty is the right approach.
Robert Wiblin: I guess it does go to show how difficult being modest is when the rubber really hits the road, and you're just sure about something that someone else you respect just disagrees with.
But I agree. There is a real red flag when people don't seem to be actually engaging with the substance of the issues, which happens surprisingly often. They'll write something which just suggests "I just don't like the tone" or "I don't like this topic" or "This whole thing makes me kind of mad", but they can't explain why exactly.
I think you raise interesting points. A few thoughts (which are again more like my views than "what the research says"):
I agree that something like the general trustworthiness of the environment also matters. And it seems good to me to increase the proportion of reliable to unreliable messages one receives, to make people better able to spot unreliable messages and avoid updating (incorrectly) on them, and to make people better able to update on correct messages. (Though I'm not sure how tractable any of those things are.)
I agree that a major risk from the proliferation of misinformation, fake news, etc., is that people stop seeking out or updating on info in general, rather than just that they update incorrectly on the misinfo. But I wouldn't say that that's "the real problem with fake news"; I'd say it's a real problem, but that updating on the misinfo is another real problem (and I'm not sure which is bigger).
As a minor point, I think that when people spread misinfo, someone else updates on it, and then the world more generally gets worse due to voting for stupid policies or whatever, that's also an externality. (The actions taken caused harm to people who weren't involved in the original "transaction".)
I agree you can fool/mislead people without lies. You can use faulty arguments, cherry-picking, fairly empty rhetoric that "feels" like it points a certain way, etc.
I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate
Not sure if I understand the suggestion, or rather how you envision it adding value compared to the current system.
Fact-checkers already do say both that some statements are false and that others are accurate.
Also, at least some of them already have ways to see what proportion of the claims they evaluated from a given person turned out to be true vs. false. Although that's obviously not the same as what proportion of all a source's claims (or all of a source's important claims, or whatever) are true.
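To illustrate that gap with entirely made-up numbers: if fact-checkers mostly pick claims that already look dubious, a source's "scorecard" accuracy can sit far below its actual accuracy.

```python
# Toy numbers (invented for illustration): a mostly-accurate source whose
# checked claims are disproportionately the dubious-looking ones.
total_claims, true_claims = 1000, 950   # 95% of all its claims are true
checked, checked_true = 60, 20          # but only 33% of *checked* claims are

print(f"Scorecard accuracy (checked claims): {checked_true / checked:.0%}")      # 33%
print(f"Actual accuracy (all claims):        {true_claims / total_claims:.0%}")  # 95%
```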
But it seems like trying to objectively assess various sources' overall accuracy would be very hard and controversial. And one way we could view the current situation is that most info that's spread is roughly accurate (though often out of context, not highly important, etc.), some is not, and the fact-checkers pick up claims that seem like they might be inaccurate and then say whether they are. So we can perhaps see ourselves as already having something like an overall screening for general inaccuracy of quite prominent sources, in that, if fact-checking agencies haven't pointed out false statements of theirs, they're probably generally roughly accurate.
That's obviously not a very fine-grained assessment, but I guess what I'm saying is that it's something, and that adding value beyond that might be very hard.