I think I've got similar concerns and thoughts on this. I'm vaguely aware of various ideas for dealing with these issues, but I haven't kept up with that, and I'm not sure how effective they are or will be in future.
The idea of making captcha requirements very widespread before things like commenting is one I haven't heard before, and it seems like it could plausibly cut off part of the problem at relatively low cost.
I would also quite like it if much better epistemic norms were widespread across society, such as people feeling embarrassed when it's pointed out that they stated something non-obvious as fact without referencing sources. (Whereas it could still be fine to state very obvious things as facts without always sharing sources, or to state non-obvious things as fairly confident conjectures rather than as facts.)
But some issues also come to mind (note: these are basically speculation, rather than drawing on research I've read):
It seems somewhat hard to draw the line between ok and not ok behaviours (e.g., what claims are self-evident enough that it's ok to omit a source? What sort of tone and caveats are sufficient for various sorts of claims?)
And it's therefore conceivable that these sorts of norms could be counterproductive in various ways. E.g., they could lead to (more) silencing or ridicule of people raising alarm bells about low-probability, high-stakes events, because there's not yet strong evidence about those events, but no one will look for the evidence until someone starts raising the alarm bells.
Though I think there are some steps that seem obviously good, like requiring sources for specific statistical claims (e.g., "67% of teenagers are doing [whatever]").
This is a sociological/psychological rather than technological fix, which does seem quite needed, but spreading norms like that widely seems hard to do.
With a lot of solutions, it seems not too hard to imagine ways they could be (at least partly) circumvented by people or groups who are actively trying to spread misinformation. (At least when those people/groups are quite well-resourced.)
E.g., even if society adopted a strong norm that people must include sources when making relatively specific, non-obvious claims, there could then perhaps be large-scale production of human- or AI-generated sources, made to look respectable at first glance, which could then be shared alongside the claims being made elsewhere.
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation. I'd guess that those more general approaches may better avoid the issues of having to draw lines in the appropriate places and of being circumvented by active efforts, but may suffer more strongly from being quite intractable or crowded. (But this is just a quick guess.)
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation.
Agreed. But I don't think we could do that without changing the environment a little bit. My point is that rationality isn't just about avoiding false beliefs (maximal skepticism), but about forming them adequately, and it's way more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a Wikipedia entry, in a newspaper, in a WhatsApp message...
The core issue isn't really "statements that are false", or people who are actually fooled by them. The problem is that, if I'm convinced I'm surrounded by lies and nonsense, I'll keep following the same path I was on before (because I have a high credence my beliefs are OK); it will just fuel my confirmation bias. Thus, the real problem with fake news is an externality. I haven't found any paper testing this hypothesis, though. If it is right, then most articles I've seen arguing "fake news didn't affect political outcomes" might be wrong.
You can fool someone even without telling any sort of lies. To steal an example I once saw on LW (still trying to find the source): imagine a random sequence of 0s and 1s; now, an Agent feeds a Principal with information about the sequence, like "there is a 1 in position n". To make the Principal believe the sequence is mainly made of 1s, all the Agent has to do is select which information to share, like "there are 1s in positions n, m, and o".
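Here is a minimal sketch of that selection effect (my own illustration; the sequence length, sample sizes, and the Principal's naive estimator are assumptions I picked for the example, not anything from the original source):

```python
import random

random.seed(0)

# A hidden random sequence of 0s and 1s (roughly half of each).
sequence = [random.randint(0, 1) for _ in range(1000)]

# Honest Agent: reveals a random sample of positions, whatever they contain.
honest_reveals = random.sample(range(len(sequence)), 50)

# Selective Agent: reveals only positions that happen to contain a 1.
# Every individual statement it makes is true; only the selection is biased.
ones_positions = [i for i, bit in enumerate(sequence) if bit == 1]
selective_reveals = random.sample(ones_positions, 50)

def naive_estimate(revealed_positions):
    """Principal's naive estimate: fraction of 1s among the revealed positions."""
    return sum(sequence[i] for i in revealed_positions) / len(revealed_positions)

print("true fraction of 1s:           ", sum(sequence) / len(sequence))
print("estimate from honest Agent:    ", naive_estimate(honest_reveals))
print("estimate from selective Agent: ", naive_estimate(selective_reveals))  # 1.0
```

The selective Agent never lies, but a Principal who treats the revealed digits as a representative sample ends up nearly certain the sequence is all 1s.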
But why would someone hire such an Agent? Well, maybe the Principal is convinced most other accessible agents are liars; it's even worse if the Agent already knows some of the Principal's biases, and easier if Principals with similar biases are clustered in groups with similar interests and jobs, like social activists, churches, military staff and financial investors. Even denouncing this scenario does not necessarily improve things; I think, at least for some countries, political outcomes were affected by common knowledge of statements like "military personnel support this, financial investors would never accept that". If you can convince voters they'll face an economic crisis or political instability by voting for candidate A, they will avoid doing so.
My personal anecdote on how this process may work for a smart and scientifically educated person: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my "rationality skills" in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn't extrapolate to the atmosphere. I was astonished that I had ignored this so far (well, maybe it was mentioned en passant in a science class), and that he hadn't taken two minutes to google it (and find out that, yes, "greenhouse" is an analogy; the point is that CO2 absorbs outgoing infrared radiation and re-radiates part of it back to Earth); but maybe I wouldn't have done so myself if I didn't already know that CO2 is pivotal in keeping Earth warm. However, after days of this, no happy end: our discussion basically ended with me pointing out that a) he couldn't provide any scientific paper backing his overall thesis (even though I would be happy to pay him if he could); and b) he would offer objections against "anthropogenic global warming" without even caring to put a consistent credence on them, like first pointing to alternative causes for the warming, and then denying the warming itself. He didn't really believe (i.e., assign a high posterior credence) that there was no warming, nor that it was a random anomaly, because these would be ungrounded, and so a target in a discussion. Since then, we have barely spoken.
P.S.: I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate, thereby mitigating what I've been calling the "lemons problem" in news. But who rates the raters? Besides the risk of capture, I don't know how to make people actually trust the agencies in the first place.
Your paragraph on climate change denial among a smart, scientifically educated person reminded me of some very interesting work by a researcher called Dan Kahan.
An abstract from one paper:
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservativism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either unreflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Two other relevant papers:
Cultural Cognition of Scientific Consensus
Climate-Science Communication and the Measurement Problem
Parts of your comment reminded me of something that's perhaps unrelated, but seems interesting to bring up, which is Stefan Schubert's prior work on "argument-checking", as discussed on an 80k episode:
Stefan Schubert: I was always interested in "What would it be like if politicians were actually truthful in election debates, and said relevant things?" [...]
So then I started this blog in Swedish on something that I call argument checking. You know, there's fact checking. But then I went, "Well, there's so many other ways that you can deceive people except outright lying." So, that was fairly fun, in a way. I had this South African friend at LSE whom I told about this, that I was pointing out fallacies which people made. And she was like "That suits you perfectly. You're so judge-y." And unfortunately there's something to that.
[...]
Robert Wiblin: What kinds of things did you try to do? I remember you had fact checking, this live fact checking on-
Stefan Schubert: Actually that is, we might have called it fact checking at some point. But the name which I wanted to use was argument checking. So that was like, in addition to fact checking, we also checked arguments.
Robert Wiblin: Did you get many people watching your live argument checking?
Stefan Schubert: Yeah, in Sweden, I got some traction. I guess, I had probably hoped for more people to read about this. But on the plus side, I think that the very top showed at least some interest in it. A smaller interest than what I had thought, but at least you reach the most influential people.
Robert Wiblin: I guess my doubt about this strategy would be, obviously you can fact check politicians, you can argument check them. But how much do people care? How much do voters really care? And even if they were to read this site, how much would it change their mind about anything?
Stefan Schubert: That's fair. I think one approach which one might take would be to, following up on this experience, the very top people who write opinion pieces for newspapers, they were at least interested, and just double down on that, and try to reach them. I think that something that people think is that, okay, so there are the tabloids, and everyone agrees what they're saying is generally not that good. But then you go to the highbrow papers, and then everything there would actually make sense.
So that is what I did. I went for the Swedish equivalent of somewhere between the Guardian and the Telegraph. A decently well-respected paper. And even there, you can point out these glaring fallacies if you dig deeper.
Robert Wiblin: You mean, the journalists are just messing up.
Stefan Schubert: Yeah, or here it was often outside writers, like politicians or civil servants. I think ideally you should get people who are a bit more influential and more well-respected to realize how careful you actually have to be in order to really get to the truth.
Just to take one subject that effective altruists are very interested in, all the writings about AI, where you get people like professors who write the articles which are really very poor on this extremely important subject. It's just outrageous if you think about it.
Robert Wiblin: Yeah, when I read those articles, and I imagine we're referring to similar things, I'm just astonished. And I don't know how to react. Because I read it, and I could just see egregious errors, egregious misunderstandings. But then, we've got this modesty issue that we were bringing up before. These are well-respected people, at least in their fields and kind of adjacent areas. And then, I'm thinking, "Am I the crazy one?" Do they read what I write, and do they have the same reaction?
Stefan Schubert: I don't feel that. So I probably reveal my immodesty.
Of course, you should be modest if people show some signs of reasonableness. And obviously if someone is arguing for a position where your prior that it's true is very low. But if they're a reasonable person, and they're arguing for it well, then you should update. But if they're arguing in a way which is very emotive, where they're not really addressing the positions that we're holding, then I don't think modesty is the right approach.
Robert Wiblin: I guess it does go to show how difficult being modest is when the rubber really hits the road, and you're just sure about something that someone else you respect just disagrees with.
But I agree. There is a real red flag when people don't seem to be actually engaging with the substance of the issues, which happens surprisingly often. They'll write something which just suggests "I just don't like the tone" or "I don't like this topic" or "This whole thing makes me kind of mad", but they can't explain why exactly.
I think you raise interesting points. A few thoughts (which are again more like my views rather than "what the research says"):
I agree that something like the general trustworthiness of the environment also matters. And it seems good to me to increase the proportion of reliable to unreliable messages one receives, to make people better able to spot unreliable messages and avoid (incorrectly) updating on them, and to make people better able to update on correct messages. (Though I'm not sure how tractable any of those things are.)
I agree that it seems like a major risk from the proliferation of misinformation, fake news, etc., is that people stop seeking out or updating on info in general, rather than just that they update incorrectly on the misinfo. But I wouldn't say that that's "the real problem with fake news"; I'd say that's a real problem, but that updating on the misinfo is another real problem (and I'm not sure which is bigger).
As a minor thing, I think when people spread misinfo, someone else updates on it, and then the world more generally gets worse due to voting for stupid policies or whatever, that's also an externality. (The actions taken caused harm to people who weren't involved in the original "transaction".)
I agree you can fool/mislead people without lies. You can use faulty arguments, cherry-picking, fairly empty rhetoric that "feels" like it points a certain way, etc.
I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate
I'm not sure I understand the suggestion, or rather how you envision it adding value compared to the current system.
Fact-checkers already do say both that some statements are false and that others are accurate.
Also, at least some of them already have ways to see what proportion of a given person's claims that the fact-checker evaluated turned out to be true vs. false. Although that's obviously not the same as the proportion of all a source's claims (or all of a source's important claims, or whatever) that are true.
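As a rough illustration of why those two proportions can come apart, here's a minimal sketch with made-up numbers (the 90% base rate of accurate claims and the selection probabilities are my own assumptions, chosen only to show the effect of fact-checkers preferentially checking dubious-looking claims):

```python
import random

random.seed(1)

# Suppose a source makes 1,000 claims and 90% of them are accurate.
claims_are_true = [random.random() < 0.9 for _ in range(1000)]

# Assumed selection rule: claims that are actually false look dubious more
# often, so they are far more likely to get picked for fact-checking.
def gets_checked(is_true: bool) -> bool:
    check_probability = 0.05 if is_true else 0.60
    return random.random() < check_probability

checked_claims = [c for c in claims_are_true if gets_checked(c)]

overall_accuracy = sum(claims_are_true) / len(claims_are_true)
checked_accuracy = sum(checked_claims) / len(checked_claims)

print(f"accuracy across all claims:    {overall_accuracy:.2f}")  # ~0.90
print(f"accuracy among checked claims: {checked_accuracy:.2f}")  # much lower
```

So a true-vs-false ratio computed only over the claims a fact-checker chose to evaluate can make a mostly accurate source look far less reliable than it is, which is one reason turning fact-check records into source-level ratings seems hard.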
But it seems like trying to objectively assess various sources' overall accuracy would be very hard and controversial. And it seems like one way we could view the current situation is that most info that's spread is roughly accurate (though often out of context, not highly important, etc.), and some is not, and the fact-checkers pick up claims that seem like they might be inaccurate and then say if they are. So we can perhaps see ourselves as already having something like an overall screening for general inaccuracy of quite prominent sources, in that, if fact-checking agencies haven't pointed out false statements of theirs, they're probably generally roughly accurate.
That's obviously not a very fine-grained assessment, but I guess what I'm saying is that it's something, and that adding value beyond that might be very hard.