Next week for the 80,000 Hours Podcast I’m interviewing Hugo Mercier, author of Not Born Yesterday: The Science of Who We Trust and What We Believe.
Hugo is a cognitive scientist and research director at the CNRS, where he works with the 'Evolution and Social Cognition' team.
One of his research interests is how we evaluate communicated information.
Hugo argues that while many people believe that human beings are gullible and easily persuaded of false ideas, in fact people are surprisingly good at telling who is trustworthy, and generally aren’t easily convinced of anything they don’t already think.
That’s because communication couldn’t evolve among humans unless it was beneficial to both the sender and the receiver of information. If the receiver generally lost out, they would stop listening entirely.
Given this outlook, he’s skeptical that social media and fake news are big drivers of our current problems.
He’s also skeptical that advances in AI or LLMs will make it easy to persuade large numbers of people of things they aren’t already inclined to believe.
(Of course we do have systemic weaknesses: one he points out is that we’re bad at detecting when what look like two independent sources of information are actually just one source.)
Blinkist summarises the top 6 messages of ‘Not Born Yesterday’ as:
When deciding what to believe, we seek out beliefs that speak to our goals and match our views.
Individuals with common goals have no incentive to send unreliable communication signals.
Open vigilance mechanisms have evolved to help us accept beneficial messages and reject harmful ones.
We rely on prior beliefs and reasoning to evaluate the plausibility of communicated information.
We depend on intuition to decide if others are more competent or better informed.
Fake news doesn’t usually mislead people; it justifies actions they were going to take anyway.
What should I ask him?
I’d be interested to hear if he has any ideas for pro-rationality interventions.
That looks like a great interview subject!
I’m confused. I thought the general take was “people are tricked into believing things that are not true”, not “people are tricked into believing things that are bad for them”. The above argument is a reason to think the second claim is false, but not the first claim (since you can have false beliefs that are nonetheless not bad for you).
Also, could you not have communication evolve even if people are gullible, so long as it is good for groups to have unity/cohesion/obedience? Groups and tribes with more gullible members might have outcompeted groups with more independent-minded members if the former were more united/cohesive.
Some other questions:
What does he make of the claim that all cognitive biases at heart are just confirmation bias based around a few “fundamental prior” beliefs?
Is he an atheist, and if so what does he make of humanity’s history of belief in religion? I am thinking especially of times and places that were especially fertile ground for new religious ideas, e.g., the Mediterranean prior to and during the spread of Christianity, the Second Great Awakening, and the Taiping Rebellion in China. I think those were times when many people readily believed false ideas—why?
On social media and fake news, can he imagine any plausible information ecologies that would cause major problems? How would those look, and why will we avoid them?
Similarly, can he imagine an ideal information ecology? How different is it from what we have today, and how much would things change if we could switch over?
You could argue that fake news is a problem not because it convinces people of falsehoods, but because it spurs them into action, or pushes their beliefs to extremes (e.g., by providing more extreme evidence for their beliefs than reality does). What does he make of that argument?
Presumably people sometimes do change their mind. What’s his model of how that typically happens? (Presumably it mostly involves things you would not call persuasion.)
Does he think LLMs and voice synthesis will be widely used for scams in the next decade? If not, why not? If yes, does scamming not involve persuasion?
Why did the ad media industry have over $800B in revenue last year?
My question: do we have different capacities to detect 1) dishonesty (e.g. a scam from a con artist), 2) motivated reasoning or conflict of interest (e.g. a salesperson pitching us a product), and 3) sincere but nonetheless false beliefs (e.g. an ideologue giving a speech)?
I could more easily buy that we have good instincts for (1) or for (1) and (2) than for (3).