I knew a bit about misinformation and fact-checking in 2017. AMA, if you’re really desperate.
In 2017, I did my Honours research project on whether, and how much, fact-checking politicians’ statements influenced people’s attitudes towards those politicians, and their intentions to vote for them. (At my Australian university, “Honours” meant a research-focused, optional, selective 4th year of an undergrad degree.) With some help, I later adapted my thesis into a peer-reviewed paper: Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. This was all within the domains of political psychology and cognitive science.
During that year, and in a unit I completed earlier, I learned a lot about:
how misinformation forms
how it can be “sticky”
how it can continue to influence beliefs, attitudes, and behaviours even after being corrected/retracted, and even if people do remember the corrections/retractions
ways of counteracting, or attempting to counteract, these issues
E.g., fact-checking, or warning people that they may be about to receive misinformation
various related topics in the broad buckets of political psychology and how people process information, such as impacts of “falsely balanced” reporting
The research that’s been done in these areas has provided many insights that I think might be useful for various EA-aligned efforts. For some examples of such insights and how they might be relevant, see my comment on this post. These insights also seemed relevant in a small way in this comment thread, and in relation to the case for building more and better epistemic institutions in the effective altruism community.
I’ve considered writing something up about this (beyond those brief comments), but my knowledge of these topics is too rusty for that to be something I could smash out quickly and to a high standard. So I’d like to instead just publicly say I’m happy to answer questions related to those topics.
I think it’d be ideal for questions to be asked publicly, so others might benefit, but I’m also open to discussing this stuff via messages or video calls. The questions could be about anything from a super specific worry you have about your super specific project, to general thoughts on how the EA community should communicate (or whatever).
Disclaimers:
In 2017, I probably wasn’t adequately concerned by the replication crisis, and many of the papers I was reading were from before psychology’s attention was drawn to that. So we should assume some of my “knowledge” is based on papers that wouldn’t replicate.
I was never a “proper expert” in those topics, and I haven’t focused on them since 2017. (I ended up with First Class Honours, meaning that I could do a fully funded PhD, but decided against it at that time.) So it might be that most of what I can provide is pointing out key terms, papers, and authors relevant to what you’re interested in.
If your question is really important, you may want to just skip to contacting an active researcher in this area or checking the literature yourself. You could perhaps use the links in my comment on this post as a starting point.
If you think you have more or more recent expertise in these or related topics, please do make that known, and perhaps just commandeer this AMA outright!
(Due to my current task list, I might respond to things mostly from 14 May onwards. But you can obviously comment & ask things before then anyway.)
I’d like to have read this before having our discussion:
But their recommendations sound scary:
Interesting article—thanks for sharing it.
Why do you say their recommendations sound scary? Is it because you think they’re intractable or hard to build support for?
Sorry, I should have been clearer: I think “treating attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners” is hard to build support for, and may carry some risk of abuse.
I’ve seen some serious stuff on epistemic and memetic warfare. Do you think misinformation on the web has recently been, or is currently being, used as an effective weapon against countries or peoples? Is it qualitatively different from good old conspiracies and smear campaigns? Do you have some examples? Can standard ways of counteracting it (e.g., fact-checking) work effectively in the case of an intentional attack? (My guess: probably not; an attacker can spread misinformation more effectively than we can spread fact-checking, and warning people about it will increase mistrust and polarization, which might be the goal of the campaign.) What would be your credences on your answers?
Good questions!
Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn’t pick up on much writing and discussion about these points.) So it’s a bit beyond my area.
But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:
I suspect misinformation at least could be an “effective weapon” against countries or peoples, in the sense of causing them substantial damage.
I’d see (unfounded) conspiracy theories and smear campaigns as subtypes of spreading misinformation, rather than as something qualitatively different. But I think today’s technology allows for spreading misinformation (of any type) much more easily and rapidly than people could previously.
At the same time, today’s technology also makes flagging, fact-checking, and otherwise countering misinformation easier.
I’d wildly speculate that, overall, the general public are much better informed than they used to be, but that purposeful efforts to spread misinformation will more easily have major effects now than previously.
This is primarily based on the research I’ve seen (see my other comment on this post) that indicates that even warnings about misinfo and (correctly recalled!) corrections of misinfo won’t stop that misinfo having an effect.
But I don’t actually know of research that’s looked into this. We could perhaps call this question: How does the “offense-defense” balance of (mis)information spreading scale with better technology, more interconnectedness, etc.? (I take the phrase “offense-defense balance” from this paper, though it’s possible my usage here is not in line with what the phrase should mean.)
My understanding is that, in general, standard ways of counteracting misinfo (e.g., fact-checking, warnings) tend to be somewhat but not completely effective in countering misinfo. I expect this would be true for accidentally spread misinfo, misinfo spread deliberately by e.g. just a random troll, or misinfo spread deliberately by e.g. a major effort on the part of a rival country.
But I’d expect that the latter case would be one where the resources dedicated to spreading the misinfo will more likely overwhelm the resources dedicated towards counteracting it. So the misinfo may end up having more influence for that reason.
We could also perhaps wonder about how the “offense-defense” balance of (mis)information spreading scales with more resources. It seems plausible that, after a certain amount of resources dedicated by both sides, the public are just saturated with the misinfo to such an extent that fact-checking doesn’t help much anymore. But I don’t know of any actual research on that.
One thing that you didn’t raise, but which seems related and important, is how advancements in certain AI capabilities could affect the impacts of misinformation. I find this concerning, especially in connection with the point you make with this statement:
Early last year, shortly after learning about EA, I wrote a brief research proposal related to the combination of these points. I never pursued the research project, and have now learned of other problems I see as likely more important, but I still do think it’d be good for someone to pursue this sort of research. Here it is:
References:
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Anderson, H. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
Koehler, D. J. (2016). Can journalistic “false balance” distort public perception of consensus in expert opinion? Journal of Experimental Psychology: Applied, 22(1), 24-38.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106-131.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.
Thanks a lot!
I think “offense-defense balance” is a very apt term here. I wonder if you have any personal opinion on how to improve our situation on that front. I guess that when it comes to AI-powered misinformation spread through the media, it’s particularly concerning how easily it can overrun our defenses: even if we succeed in fact-checking every inaccurate statement, doing so will require a lot of resources and will probably lead to a situation of widespread uncertainty or mistrust, where people, unable to screen for reliable info, will succumb to confirmatory bias or peer pressure (I feel tempted to draw an analogy with DDoS attacks, or even with the lemons problem).
So, despite everything I’ve read about the subject (though not very systematically), I haven’t seen feasible, well-written strategies for addressing this asymmetry, except for some papers on moderation in social networks and forums (and even that is quite time-consuming, unless moderators draw up clear guidelines, as on this forum). I wonder why societies (through authorities or self-regulation) can’t agree to impose even minimal reliability requirements, like demanding captcha tests before messages can be spread (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source. Even newspapers refuse to do the latter (my guess is that they’re afraid such a norm would compromise source confidentiality and their protections against lawsuits). If this were an established practice, one could easily screen out (at least grossly) unreliable messages by checking their sources (or pointing out their absence), besides deterring such messages in the first place.
I think I’ve got similar concerns and thoughts on this. I’m vaguely aware of various ideas for dealing with these issues, but I haven’t kept up with that, and I’m not sure how effective they are or will be in future.
The idea of making captcha requirements before things like commenting very widespread is one I haven’t heard before, and seems like it could plausibly cut off part of the problem at relatively low cost.
I would also quite like it if there were much better epistemic norms widespread across society, such as people feeling embarrassed if people point out they stated something non-obvious as a fact without referencing sources. (Whereas it could still be fine to state very obvious things as facts without sharing sources all the time, or to state non-obvious things as fairly confident conjectures rather than as facts.)
But some issues also come to mind (note: these are basically speculation, rather than drawing on research I’ve read):
It seems somewhat hard to draw the line between ok and not ok behaviours (e.g., what claims are self-evident enough that it’s ok to omit a source? What sort of tone and caveats are sufficient for various sorts of claims?)
And it’s therefore conceivable that these sorts of norms could be counterproductive in various ways. E.g., lead to (more) silencing or ridicule of people raising alarm bells about low probability high stakes events, because there’s not yet strong evidence about that, but no one will look for the evidence until someone starts raising the alarm bells.
Though I think there are some steps that seems obviously good, like requiring sources for specific statistical claims (e.g., “67% of teenagers are doing [whatever]”).
This is a sociological/psychological rather than technological fix, which does seem quite needed, but also seems quite hard to implement. Spreading norms like that widely seems hard to do.
With a lot of solutions, it seems not too hard to imagine ways they could be (at least partly) circumvented by people or groups who are actively trying to spread misinformation. (At least when those people/groups are quite well-resourced.)
E.g., even if society adopted a strong norm that people must include sources when making relatively specific, non-obvious claims, there could then perhaps be large-scale human- or AI-generated sources being produced, and made to look respectable at first glance, which can then be shared alongside the claims being made elsewhere.
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation. I’d guess that those more general approaches may better avoid the issue of difficulty drawing lines in the appropriate places and being circumventable by active efforts, but may suffer more strongly from being quite intractable or crowded. (But this is just a quick guess.)
Agreed. But I don’t think we could do that without changing the environment a little bit. My point is that rationality isn’t just about avoiding false beliefs (maximal skepticism), but about forming beliefs adequately, and it’s way more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a Wikipedia entry, in a newspaper, in a WhatsApp message...
The core issue isn’t really “statements that are false”, or people who are actually fooled by them. The problem is that, if I’m convinced I’m surrounded by lies and nonsense, I’ll keep following the same path I was before (because I have a high credence my beliefs are OK); it will just fuel my confirmatory bias. Thus, the real problem with fake news is an externality. I haven’t found any paper testing this hypothesis, though. If it is right, then most articles I’ve seen on “fake news didn’t affect political outcomes” might be wrong.
You can fool someone even without telling any sort of lies. To steal an example I once saw on LW (still trying to find the source): imagine a random sequence of 0s and 1s; an Agent feeds a Principal true information about the sequence, like “there is a 1 in position n”. To make the Principal believe the sequence is mostly made of 1s, all the Agent has to do is select what to report, e.g. “there are 1s in positions n, m, and o”, mentioning only positions that contain a 1.
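(To make that concrete, here’s a minimal sketch in Python. It’s my own illustration of the selective-reporting idea, not something from the original LW post; the honest_agent, selective_agent, and naive_estimate functions are hypothetical names.)

```python
import random

# A toy model: the Principal estimates the fraction of 1s in a hidden
# random sequence using only the positions an Agent chooses to report.
# Every report is true, yet a selective Agent can still mislead badly.

random.seed(0)
sequence = [random.randint(0, 1) for _ in range(1000)]  # roughly half 1s

def honest_agent(seq, n_reports=20):
    """Reports the digits at randomly chosen positions."""
    positions = random.sample(range(len(seq)), n_reports)
    return [seq[p] for p in positions]

def selective_agent(seq, n_reports=20):
    """Reports only positions that contain a 1 (every report is still true)."""
    ones = [i for i, digit in enumerate(seq) if digit == 1]
    positions = random.sample(ones, n_reports)
    return [seq[p] for p in positions]

def naive_estimate(reports):
    """The Principal's estimate of the fraction of 1s, taking reports at face value."""
    return sum(reports) / len(reports)

print("True fraction of 1s:          ", sum(sequence) / len(sequence))
print("Estimate from honest agent:   ", naive_estimate(honest_agent(sequence)))
print("Estimate from selective agent:", naive_estimate(selective_agent(sequence)))
# The selective agent pushes the naive estimate to 1.0 without a single lie.
```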
But why would someone hire such an agent? Well, maybe the Principal is convinced that most other accessible agents are liars; it’s even worse if the Agent already knows some of the Principal’s biases, and easier if Principals with similar biases are clustered in groups with similar interests and jobs, like social activists, churches, military staff, and financial investors. Even denouncing this scenario does not necessarily improve things; I think that, at least in some countries, political outcomes were affected by common knowledge of statements like “military personnel support this; financial investors would never accept that”. If you can convince voters that they’ll face an economic crisis or political instability by voting for candidate A, they will avoid voting for that candidate.
A personal anecdote on how this process may work for a smart and scientifically educated person: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my “rationality skills” in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn’t extrapolate to the atmosphere. I was astonished that I had overlooked this point so far (well, maybe it was mentioned en passant in a science class), and that he didn’t take two minutes to google it (and find out that, yes, “greenhouse” is an analogy; the problem is that CO2 absorbs infrared radiation and re-emits part of it back toward Earth’s surface); but maybe I wouldn’t have done so myself if I hadn’t already known that CO2 is pivotal in keeping Earth warm. However, after days of this, there was no happy ending. Our discussion basically ended with me pointing out that (a) he couldn’t provide any scientific paper backing his overall thesis (even though I would have been happy to pay him if he could); and (b) he kept raising objections against “anthropogenic global warming” without even caring to put a consistent credence on them, like first pointing to alternative causes of the warming and then denying the warming itself. He didn’t really believe (i.e., assign a high posterior credence to the claim) that there was no warming, or that it was a random anomaly, because those claims would be ungrounded, and so a target in a discussion. Since then, we’ve barely spoken.
P.S.: I wonder if fact-checking agencies could evolve to some sort of “rating agencies”; I mean, they shouldn’t only screen for false statements, but actually provide information about who is accurate—so mitigating what I’ve been calling the “lemons problem in news”. But who rates the raters? Besides the risk of capture, I don’t know how to make people actually trust the agencies in the first place.
Your paragraph on climate change denial among a smart, scientifically educated person reminded me of some very interesting work by a researcher called Dan Kahan.
An abstract from one paper:
Two other relevant papers:
Cultural Cognition of Scientific Consensus
Climate‐Science Communication and the Measurement Problem
Parts of your comment reminded me of something that’s perhaps unrelated, but seems interesting to bring up, which is Stefan Schubert’s prior work on “argument-checking”, as discussed on an 80k episode:
I think you raise interesting points. A few thoughts (which are again more like my views rather than “what the research says”):
I agree that something like the general trustworthiness of the environment also matters. And it seems good to me to both increase the proportion of reliable to unreliable messages one receives and to make people better able to spot unreliable messages and avoid updating (incorrectly) on them and to make people better able to update on correct messages. (Though I’m not sure how tractable any of those things are.)
I agree that it seems like a major risk from proliferation of misinformation, fake news, etc., is that people stop seeking out or updating on info in general, rather than just that they update incorrectly on the misinfo. But I wouldn’t say that that’s “the real problem with fake news”; I’d say that’s a real problem, but that updating on the misinfo is another real problem (and I’m not sure which is bigger).
As a minor thing, I think when people spread misinfo, someone else updates on it, and then the world more generally gets worse due to voting for stupid policies or whatever, that’s also an externality. (The actions taken caused harm to people who weren’t involved in the original “transaction”.)
I agree you can fool/mislead people without lies. You can use faulty arguments, cherry-picking, fairly empty rhetoric that “feels” like it points a certain way, etc.
Not sure if I understand the suggestion, or rather how you envision it adding value compared to the current system.
Fact-checkers already do say both that some statements are false and that others are accurate.
Also, at least some of them already let you see what proportion of a given person’s claims, among those the fact-checker evaluated, turned out to be true vs. false. Although that’s obviously not the same as what proportion of all a source’s claims (or all of a source’s important claims, or whatever) are true.
But it seems like trying to objectively assess various sources’ overall accuracy would be very hard and controversial. And it seems like one way we could view the current situation is that most info that’s spread is roughly accurate (though often out of context, not highly important, etc.), and some is not, and the fact-checkers pick up claims that seem like they might be inaccurate and then say if they are. So we can perhaps see ourselves as already having something like an overall screening for general inaccuracy of quite prominent sources, in that, if fact-checking agencies haven’t pointed out false statements of theirs, they’re probably generally roughly accurate.
That’s obviously not a very fine-grained assessment, but I guess what I’m saying is that it’s something, and that adding value beyond that might be very hard.
Meta comment
I felt unsure how many people this AMA would be useful to, if anyone, and whether it would be worth posting.
But I’d guess it’s probably a good norm for EAs who might have relatively high levels of expertise in a relatively niche area to just make themselves known, and then let others decide whether it seems worthwhile to use them as a bridge between that niche area and EA. The potential upside (the creation of such bridges) seems notably larger than the downside (a little time wasted writing and reading the post before people ultimately decide it’s not valuable and scroll on by).
I’d be interested in other people’s thoughts on that idea, and whether it’d be worth more people doing “tentative AMAs”, if they’re “sort-of” experts in some particular area that isn’t known to already be quite well represented in EA (e.g., probably not computer science or population ethics). E.g., maybe someone who did a Masters project on medieval Europe could do an AMA, without really knowing why any EAs would care, and then just see if anyone takes them up on it.
It’s now occurred to me that a natural option to compare this against is having something like a directory listing EAs who are open to 1-on-1s on various topics, where their areas of expertise or interest are noted. Like this or this.
Here are some quick thoughts on how these options compare. But I’d be interested in others’ thoughts too.
Relative disadvantages of this “tentative AMA” approach:
Less centralised; you can’t see all the people listed in one place (or a small handful of places)
Harder to find again later; this post will soon slip off the radar, unless people remember it or happen to search for it
Maybe directs a disproportionate amount of attention/prominence to the semi-random subset of EAs who decide to do a “tentative AMA”
E.g., for at least a brief period, this post is on the frontpage, just as an AMA from Toby Ord, Will MacAskill, etc. would be, even though their AMAs would be much more notable and relevant for many EAs. If a lot of people did “tentative AMAs”, that’d happen a lot. Whereas just one post where all such people can comment or add themselves to a directory would only “take up attention” once, in a sense.
On the other hand, the karma system provides a sort of natural way of sorting that out.
Relative advantage of this “tentative AMA” approach:
More likely to lead to public answers and discussion, rather than just 1-on-1s, which may benefit more people and allow the discussion to be found again later
To get the ball rolling, and give examples of some insights from these areas of research and how they might be relevant to EA, here’s an adapted version of a shortform comment I wrote a while ago:
Potential downsides of EA’s epistemic norms (which overall seem great to me)
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect (CIE) of misinformation, and related areas, which might suggest downsides to some of EA’s epistemic norms. Examples of the norms I’m talking about include just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong.
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
From this paper’s abstract:
This seems to me to suggest some value in including “epistemic status” messages up front, but also that doing so doesn’t make it totally “safe” to publish posts before having familiarised oneself with the literature and checked one’s claims. (This may suggest potential downsides to both this comment and this whole AMA, so please consider yourself both warned and warned that the warning might not be sufficient!)
Similar things also make me a bit concerned about the “better wrong than vague” norm/slogan that crops up sometimes, and also make me hesitant to optimise too much for brevity at the expense of nuance. I see value in the “better wrong than vague” idea, and in being brief at the cost of some nuance, but it seems a good idea to make tradeoffs like this with these psychological findings in mind as one factor.
Here are a couple other seemingly relevant quotes from papers I read back then (and haven’t vetted since then):
“retractions [of misinformation] are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation].” (source) (see also this source)
“we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a “false balance”], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions.” (emphasis added) (source)
This seems relevant to norms around “steelmanning” and explaining reasons why one’s own view may be inaccurate. Those overall seem like very good norms to me, especially given EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine “controversy” or climate change. But it does seem those norms could perhaps lead to overweighting of the counterarguments when they’re actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But that’s all just my own speculative generalisations of the findings on “falsely balanced” coverage.
Two more examples of how these sorts of findings can be applied to matters of interest to EAs:
Seth Baum has written a paper entitled Countering Superintelligence Misinformation drawing on this body of research. (I stumbled upon this recently and haven’t yet had a chance to read beyond the abstract and citations.)
In a comment, Jonas Vollmer applied ideas from this body of research to the matter of how best to handle interactions about EA with journalists.