#180 – Why gullibility and misinformation are overrated (Hugo Mercier on the 80,000 Hours Podcast)

We just published an interview: Hugo Mercier on why gullibility and misinformation are overrated. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.

Episode summary

There are now dozens, if not hundreds, of experiments showing that in the overwhelming majority, or quasi-entirety, of cases, when you give people a good argument for something, something that is based in fact, from some authority that they trust, then they are going to change their mind. Maybe not enough, not as much as we’d like them to, but the change will be in the direction that you would expect. In a way, that’s the sensible thing to do.

And you’re right that both laypeople and professional psychologists have been and still are very much attracted to demonstrations that human adults are irrational and a bit silly, because it’s more interesting. We are attracted by mistakes, by errors, by kind of silly behaviour, but that doesn’t mean this is representative at all.

- Hugo Mercier

The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — placing it ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI, or a future super-persuasive AI, might change the game and make it extremely hard to figure out what is going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

In this interview, host Rob Wiblin and Hugo discuss:

  • How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.

  • How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.

  • Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.

  • Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.

  • The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.

  • Why fake news and conspiracy theories actually have less impact than most people assume.

  • False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.

  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Highlights

The evolutionary argument against humans being gullible

Hugo Mercier: The basic argument is one that was actually laid out by Richard Dawkins and a colleague in the ’70s and ’80s. It goes something like this: within any species, or across species, if you have two individuals that communicate, you have senders of information and receivers of information. Both have to benefit from communication. The sender has to, on average, benefit from communication, otherwise they would evolve to stop sending messages. But the receiver also has to, on average, benefit from communication, otherwise they would simply evolve to stop receiving messages. In the same way that cave-dwelling animals might lose their vision because vision is pointless, if most of the signals you were getting from others were noise — or even worse, harmful — then you would just evolve to stop receiving those signals.

Rob Wiblin: Yeah. So in the book you point out that, having said that, a lot of people will be thinking about persuasion and gullibility and convincingness and perceptiveness as a sort of evolutionary arms race between people’s ability to trick one another and people’s ability to detect trickery from others and not be deceived. But your take is that this is sort of the wrong way to think about it. Can you explain why?

Hugo Mercier: Yes. I think this view of an arms race is tempting, but it’s mistaken in that it starts from a place of great gullibility. It’s as if people started out gullible and then evolved to be increasingly sceptical, and so to increasingly reject messages. Whereas in fact — hypothetically, because we don’t know exactly how our ancestors used to communicate — if we extrapolate based on other great apes, for instance, we can see that they have a communication system that is much more limited than ours, and they’re much more sceptical.

For instance, if you take a chimpanzee and you try pointing to help the chimpanzee figure out where something is, the chimpanzee is not going to pay attention to you as a rule. They’re very sceptical, very distrustful in a way, because they live in an environment in which they have little reason to trust each other. By contrast, if you take humans as the opposite endpoint — I mean, obviously chimpanzees are not our ancestors, but assuming our last common ancestor was more similar to the chimps than it was to humans — we rely on communication for almost everything we do in our everyday life. And that has likely been true for most of our recent evolution, which means we have become able to accept more information; we take in vastly more information from others than any other great ape.

Rob Wiblin: So one take would be that you need discernment to make the information useful to the recipient, so that they can dismiss messages that are bad. And there’s a kind of truth to that. But the other framing is that you need discernment for communication to take place at all, because if you were undiscerning, you would simply close your ears and, like many other species, basically just not listen, not pay attention, or not process the information coming from other members of your species. So it’s only because we were able to evolve the ability to tell truth from fiction that communication evolved as a human habit at all. Is that basically it?

Hugo Mercier: Yes, exactly. It’s really striking in the domain of the evolution of communication how you can find communication that works — even within species, for instance, that you think would be very adversarial. If you take a typical example of some gazelles and some of their predators, like packs of dogs, you think they’re pure adversaries: the dogs want to eat the gazelle; the gazelle doesn’t want to be eaten.

But in fact, some gazelles have evolved this behaviour of stotting, where they jump without going anywhere — they just jump in place. And by doing that, they’re signalling to the dogs that they’re really fit, and that they would likely outrun the dogs. And this signalling is possible only because stotting is a reliable indicator that you would outrun the dogs: it’s impossible if you’re a sick gazelle, if you’re an old gazelle, if you’re a young gazelle, if you’re a gazelle with a broken leg. You can’t do it. So the dogs can believe the gazelles, so to speak, because the gazelles are sending an honest signal.

How AI could make the information environment worse

Rob Wiblin: I want to come to this other worry, which I called spam. To me, this is the most plausible way that AI could make the information environment worse: helping to produce and disseminate vast amounts of low-quality or misleading content, even more than exists currently. We don’t really expect that it’s going to persuade anyone of any particular conclusion, but there’s an alternative effect where it’s just increasing the noise that’s out there; it’s just cluttering up the internet with lots of untrustworthy information, where it’s effortful to figure out that it’s untrustworthy because it looks like a paper, it looks like a real study, it looks like a real blog post.

So people realise this is the case and begin to mistrust everything they see a little bit more, because a lot of the cues they might use to judge the credibility of things are no longer as reliable as they used to be; they’re too easy to fake. So the end result is that, for practical reasons, they give up on trying to form strong views on most topics, and they end up feeling it’s just not worth the effort to learn about what’s going on, because it’s so easy to generate a fake video of events that never happened, or fake papers purporting to show some conclusion, or fake accounts creating a false impression of what other people believe.

I think one reason I worry about this is that, as I understand it, this has been an approach many governments have used with their own populations when they’re worried about rebellion: just produce an enormous amount of noise and confusion. It’s not that people come to believe the regime is good, but they no longer trust any particular thing they’re observing, so they just kind of opt out of public discourse.

What do you think of that risk? Can you imagine that being something that plays out over time?

Hugo Mercier: My intuition would be that most people still rely on curation to a large extent. So if you’re deciding whether to trust a piece of news, to some extent you make up your own mind based on its content — and if it’s something that’s too implausible, then you’ll be sceptical. But for all the things that are within the range of the broadly plausible, the main moderator is going to be the source. So if you read it in a reliable newspaper, or if it’s tweeted by a colleague you trust, at least within a given area of expertise, then that’s how you know the information is reliable, or at least worth considering.

And the fact that there’s a lot of junk out there shouldn’t change that fundamentally. The only problem would be if these couriers of information — the people who relay information to you, often by creating it themselves — became less reliable; if their job becomes so hard that they stop being reliable, then everything stops working. But I’m not sure that LLMs are going to make the jobs of journalists that different in terms of figuring out what’s true or not. I mean, you still have to talk to people, you still have to check your sources. And in many ways, LLMs can help them as well. So on balance, it’s not clear it’s going to make things harder.

You’re right that the strategy of many governments that already preside over a not-very-trustworthy political system is to increase that mistrust, so that potentially more trustworthy actors can’t gain a foothold. That’s why the German government was trying, in the Second World War, to discredit the BBC: they knew it was impossible to get Germans to believe German propaganda anymore, but at least they could try to discredit the other side. And you have the same thing in Russia and China, et cetera. But that can only work if there is not much trust to start with. If you have some actors that are trusted, it’s not obvious how you’re going to make that trust go away.

Redefining beliefs

Rob Wiblin: You want to say that when people profess silly beliefs that they don’t actually act on, or intuitively incorporate into their world model, that doesn’t show deep, real gullibility.

And there are also various other cases of seemingly really daft behaviour that you want to defend and explain in the book as motivated by really understandable, pragmatic, selfish concerns. Basically, if people are persuaded that their self-interest requires them to say they believe some stupid thing, typically they are willing to do it. But that doesn’t necessarily mean they’ve been persuaded of the belief on a deep level. So in these cases it’s less an epistemic error and more that they’re kind of being bribed; they’re being paid to claim they believe in magic or whatever else.

One of the examples of this you alluded to earlier is that a nontrivial number of people say they believe the Earth is flat. How could you explain that? Typically the reason is that people really enjoy the social group, the kind of social dynamic that comes along with these flat-earther groups. Is there much more to say about that, other than that people who are kind of lonely, and maybe don’t feel like they have many allies in life, often look for unusual beliefs that can bind a group together, that they can all profess? And then that increases the loyalty between them, and allows them to hang out and feel like they have something special?

Hugo Mercier: Yes, I think that’s a potential explanation. It seems as if people who turn towards conspiracy theories are people who maybe don’t have the status they think they should have. In the sense that instead of being people who influence others, people with strong opinions about current events and these sorts of things, they’re mostly reduced to just accepting what’s in the newspaper and what the authorities say — and that might not be fully satisfying.

A lot of people want to contribute to creating their epistemic environment. And if you can’t do that professionally, the way a journalist or researcher can, then it’s tempting to do it in a way that will make up for it — but because you’re not in a nurturing institutional context, it’s likely to go astray. So people do their own research, and they create these sometimes very elaborate and quite knowledgeable theories about vaccination, or the fact that the Earth is flat, or that whoever is killed…

In a way, I can really understand their motivation. I was talking to a journalist who has studied a lot of QAnon people, and he was describing how the work these people were doing, and the feelings they had when they felt they were uncovering new evidence, were not very different from what he felt as a journalist when he was figuring out how a story fit together. So he really understood their motivation, in a way. Unfortunately, the outcome isn’t great, but the motivation isn’t intrinsically bad.

Vaccine hesitancy

Rob Wiblin: I guess people who are reluctant to get vaccinations have been pretty vilified in recent years. One explanation for this reluctance is generally bad judgement; another might be that people are gullible and don’t know who to trust, so they’re trusting these quack doctors. What’s your explanation for why so many people are scared to get vaccinated?

Hugo Mercier: In a way, we can tell that it’s not sheer gullibility, because we find the same pattern everywhere, and we have found it since the beginning of mandatory vaccination or inoculation in Britain about two centuries ago. In every society, there will be a few percent of people who quite staunchly oppose vaccination, really committed anti-vax people. And you’ll have 10%, 15%, 20% or more who are more vaccine hesitant. The worst country is probably actually France, which I’m a bit ashamed about. So you have this, and you find it just about everywhere in the world. Again, you found it in England as soon as vaccination was introduced. So it just seems to be a fact of human nature that some people will find vaccination to be a bad thing.

And I think it resonates with a lot of people. Even people who are mostly pro-vax can at least sort of understand the intuition that injecting something related to a disease into a baby that is perfectly healthy doesn’t seem like the most straightforward thing to do. Imagine if you didn’t know anything about vaccination and you encountered a tribe that takes a bit of blood from a sick cow and puts it in a perfectly healthy baby. You’re going to think they’re nuts. I’m taking the example of the cow because that’s how inoculation against smallpox started in the UK.

So obviously, given everything we know about vaccination, you should get all the vaccines that are recommended by the health system. But I can see how it’s not the most intuitive therapy. It’s not like having a broken arm, where someone says, “We should probably set the bone,” and you say, “OK, sure. Yeah, let’s do it.” Instead they say, “Oh, your kid is perfectly fine. We should take this thing from that sick person and transform it and then put it in your kid.” It doesn’t sound great. So there’s an intuition, I think, that many people share that vaccination isn’t the best therapy.

And we know that this is the prime driver, and not the stories about vaccination causing autism, for instance. Because while in every culture there are people who are going to doubt vaccination, the reasons they offer to justify that doubt vary tremendously from one culture to the next. So in the West, it has recently been a lot about vaccines, the MMR vaccine in particular, causing autism. It used to be that smallpox inoculation would turn you into a cow. There are many cultures in which it’s going to make you sterile, it’s going to give you AIDS, it’s going to give you all sorts of bad things. The justifications vary a lot, because they are the ones you get from your environment. But the underlying motivation to dislike vaccines is pretty much universal — not universal in the sense that everybody shares it, but in the sense that in every population you’ll have people who are very keen on being anti-vax.

Rob Wiblin: Yeah. It does make a lot of sense. At most points in history, if doctors had said, “What we should do is take the thing that makes someone else sick and put it on you,” then you would actually have been pretty justified in saying, “I don’t know, I think I’m just going to go get the homoeopathy, take just the water,” because that would have been a much safer option.

Hugo Mercier: Exactly.

Rob Wiblin: It is quite counterintuitive that you should take someone who’s healthy and then give them a transformed disease, basically. So it’s just an unfortunate fact of reality that that actually is the best treatment.

Hugo Mercier: That is really bad luck.

Open vigilance

Hugo Mercier: The term comes from the concept of epistemic vigilance, and it’s really the same thing; it’s just kind of a rebranding.

The “open” comes from the fact that the main function of all these mechanisms, as we were hinting at earlier, is really to help us be more open: to be more accepting of information, to be influenced by others when they’re right and we’re wrong. That is the ultimate function. We start from a place where we just have our own beliefs, formed through perception and inference, and the more open we are, the more we’ll be able to benefit from the fact that other people have different knowledge, different information from what we have, and we can use that.

And the “vigilance” comes from the fact that this openness, as we were also kind of saying earlier, is only made possible by the fact that we are vigilant. So it is because we check what people tell us, and we check whether we can trust them or not, that we can afford to be open to what they’re telling us.

Rob Wiblin: In what ways would you say that we are open to new information?

Hugo Mercier: For instance, when someone gives you a good argument, people tend to change their minds. This is something that’s been studied a lot in the lab, where we give people small logical or mathematical problems for which there is a perfect argument — you can just demonstrate the correct answer in a relatively easy manner. Most people get it wrong originally, because it’s one of these kinds of trick questions, and people have a very strong intuition that they got it right. And in spite of that, when you give them a good argument, they change their minds quite easily. So in a way, argumentation is how you can effect the most dramatic changes of mind, when you have arguments that are strong enough.

Rob Wiblin: Yeah. Many people will have the idea that giving people good arguments for beliefs is not always that effective, and often people will throw out good arguments on spurious grounds, or just because they conflict with what they already believe. You have this example in the book of people who’ve studied whether people sensibly incorporate new information in changing their beliefs. I think many listeners will, like me, have heard of this experiment where people who supported the Iraq War were told later on that WMDs were never found in Iraq, and in fact they didn’t exist, and that this shockingly caused them to become more in favour of the Iraq War rather than less, as you might expect.

You point out that there’s been a whole slate of experiments of this type done, and that was actually the only case in which people’s beliefs updated in the wrong direction relative to what you might expect. In some cases they didn’t move much, but that was the one case out of dozens in which it went in the wrong direction. Do you think people are maybe cherry-picking cases where folks are resistant to arguments, and ignoring the familiar everyday cases where arguments persuade us sensibly all the time?

Hugo Mercier: Yes, I think there are at least two documented cases of this backfire effect. There is another one with vaccine hesitancy, I think for one specific vaccine and one specific segment of the population. But yes, there are now dozens, if not hundreds, of experiments showing that in the overwhelming majority, or quasi-entirety, of cases, when you give people a good argument for something, something that is based in fact, from some authority that they trust, then they are going to change their mind. Maybe not enough, not as much as we’d like them to, but the change will be in the direction that you would expect. In a way, that’s the sensible thing to do.

And you’re right that both laypeople and professional psychologists have been and still are very much attracted to demonstrations that human adults are irrational and a bit silly, because it’s more interesting. Look, people can speak — having language is maybe the most amazing thing in the biological world. And yet, because maybe one time in every 50,000 words there’s a word you can’t remember, a word on the tip of your tongue, we go, “Oh my god, it’s amazing how poorly our brains work.” So yeah, we are attracted by mistakes, by errors, by kind of silly behaviour, but that doesn’t mean this is representative at all.

Intuitive and reflective beliefs

Hugo Mercier: Intuitive beliefs are beliefs that are usually formed through perception. If I see there’s a desk in front of me, I have an intuitive belief that there’s a desk in front of me, and I’m not going to try walking through it; I know I can put my laptop on it. Intuitive beliefs are also formed through some simple forms of testimony. So if my wife tells me she’s at home tonight, then I’m going to intuitively believe she’s at home tonight, and I will base my behaviour on that, acting as if I had perceived that she was at home. That’s the vast majority of our beliefs; things work really well, and these beliefs tend to be consequential and to have behavioural impact.

By contrast, reflective beliefs are beliefs that we can hold just as strongly as intuitive beliefs, so it’s not just a matter of confidence, but they tend to be largely divorced from our behaviour. You can believe something, but either because you don’t really know how to act on the basis of that belief, or for some other reason, it doesn’t translate into the kind of behaviour one would expect if you held the same belief intuitively.

So a really striking example is conspiracy theories. If you take someone who intuitively believes in a conspiracy — for instance, someone who works in a company or in a government, and they’ve seen their boss shredding documents or doing something really fishy, and they have good evidence that something really bad is going on — their reaction is going to be to shut up. They’re going to be afraid for their jobs and, in some places, for their lives. The belief has a strong emotional component, and their behaviour will be one of really not wanting to say anything; or if they do say something, they won’t shout it from the rooftops — they’ll contact a journalist anonymously or something like this.

If you contrast that with the behaviour of conspiracy theorists who don’t have actual perceptual or first-hand evidence of a conspiracy going on, these people tend not to be afraid. They can say, “I believe the CIA orchestrated 9/11 and they’re this all-powerful, evil institution” — and yet they behave as if the CIA is not going to kill them for saying it. So at most they’re going to say things, but their emotional and behavioural reactions are really stunted, or really different from what you would expect from someone who held a similar belief intuitively.

Rob Wiblin: Yeah. You give the contrast of Pakistan, where the intelligence services are known for engaging in all kinds of actual conspiracies, basically committing crimes on the regular in order to pursue their agenda. And everyone in Pakistan believes this is the case; they know intuitively that it’s the case. And they don’t go out and organise a conference talking about how the security services orchestrated a terrorist attack, because they think they would be killed.

Hugo Mercier: Yeah, people who have tried have ended up dead.

Rob Wiblin: Right. And by contrast, in other places where people claim to believe that the security services are equally evil and organising terrorist attacks all the time, they don’t seem to have much fear that there’ll be any repercussions for saying this. And that’s the difference between intuitive and reflective beliefs.

Hugo Mercier: Yes. And ironically, that means that the more vocal and widespread a conspiracy theory is, the less likely it is to be true, in a way — at least when it concerns an actor that still has a lot of power now. If it concerns old things in the past, then fair enough. But if it concerns an actor, like an institution that is supposed to be really powerful now, the bigger the claims are, the less likely they are to be correct, because otherwise you would not be out there saying them.

How people decide who to trust

Hugo Mercier: There are two main dimensions of trust, really. One has to do with competence — essentially, how likely is it that what you’re telling me is true? And that depends on how well informed you are, how much of an expert you are, whether you’re someone who is very knowledgeable in a given area. And for this, we keep track of informational access, for instance. So let’s say we have a friend in common, and I know that you’ve seen her recently. If you tell me something about her, I will tend to believe you, because presumably you’re better informed because you’ve seen her more recently.

More generally, we are pretty good at figuring out who is an expert in a given area, sometimes on the basis of relatively subtle cues. Like if you have a friend who manages to fix your computer, you’re going to think they’re a good computer person, and maybe you’ll turn to them the next time you have a computer problem.

So that’s the competence dimension: Does that person know the truth? Do they themselves have accurate beliefs? And the other dimension, which is maybe what we really call trust in everyday life, is: Are they going to tell us the truth? Because even if I believe that you’re the most expert person in the world in a given area, if I don’t trust you, if I don’t believe that you will share with me the accurate beliefs that you hold, then it’s no use to me.

That second dimension, of trust per se, depends broadly on two things. One is your short-term incentives. So even if you’re my brother, or a very good friend, if we play poker together, I’m not going to believe you — because I know that if you tell me to fold, you have no incentive to be truthful in the context of poker; we have purely opposite incentives. So there’s this kind of short-term question: what can you get from me with that specific message?

And then there’s the long-term incentives, like: Are you someone whose interests are kind of intermeshed with mine, and someone who would benefit from me doing well? And is that something that’s going to be true moving forward? So if you’re a family member, if you’re a good friend, I know that you don’t have any incentive, or very small incentives, to mislead me — because then that will jeopardise our relationship, and the cost to you as well as to me would be quite high.

Rob Wiblin: Would you generally say that we have good judgement about who to trust?

Hugo Mercier: Yes, on the whole. We make mistakes, but on the whole, I think we’re pretty good. And I think most of the mistakes we make are mistakes of the type that we don’t trust people enough, rather than trusting them too much.
