Yesterday the New Yorker published a detailed exploration of Stéphane Bourgoin, an ‘expert on serial killers’ who turned out to have comprehensively lied about his past: the murder of his girlfriend (who appears not to have existed), his credentials, how many serial killers he’d interviewed, and more. He was nonetheless taken seriously for many years, and his lies earned him genuinely privileged access to serial killers for interviews and to victims’ families and support groups.
I find serial/compulsive/career liars fascinating. One of the best serial-liar stories I ran into, as a cautionary tale for journalists, is that of Stephen Glass, the 1990s New Republic writer who turned out to be making up most of the juicy details of his articles, even forging handwritten transcripts of conversations that never happened to present to the magazine’s fact-checkers.
I mostly just read about this because it’s fun, but I do think it has crystallized some things for me which are useful to have in mind even if you don’t have fun reading about serial liars. (Takeaways are at the bottom if you want to skip to that.)
The dynamics of how serial liars go unnoticed, and how the socially awkward information “hey, we think that guy is a fraud” gets propagated (or fails to get propagated) seem to me to also describe how other less clear-cut kinds of errors and misconduct go unnoticed.
A recurring theme in the New Yorker article is that people knew this guy was full of crap, but weren’t personally motivated to go try to correct all the ways he was full of crap.
“Neither I nor any of our mutual friends at the time had heard the story of his murdered girlfriend, nor of his so-called F.B.I. training,” a colleague and friend of Bourgoin’s from the eighties told me. “It triggered rounds of knowing laughter among us, because we all knew it was absolutely bogus.”
Bourgoin was telling enough lies that eventually one of them would surely ring wrong to someone, though by then he’d often moved on to a different audience and different lies. I ended up visualizing this as a sort of expanding ring of people who’d encountered Bourgoin’s stories. With enough exposure to the stories, most people suspected something was fishy and started to withdraw, but by then Bourgoin had reached a larger audience and greater fame, speaking to new audiences for whom the warning signs hadn’t yet started to accumulate.
Eventually, he got taken down by an irritated group of internet amateurs who’d noticed all the ways in which he was dishonest and had the free time and spite to actually go around comprehensively proving it.
This is a dynamic I’ve witnessed from the inside a couple of times. There’s a Twitter personality called ‘Lindyman’ who had his fifteen minutes of internet fame last year, including a glowing New York Times profile. Much of his paid Substack content was plagiarized. A lot of people knew this, and had strong evidence for it, well before anyone demonstrated it publicly.
I personally know someone who Lindyman plagiarized from, who seriously debated whether to write a blog post to the effect of ‘Lindyman is a plagiarist’, but ended up not doing so. It would’ve taken a lot of time and effort, and probably attracted the wrath of Lindyman’s followers, and possibly led to several frustrating weeks of back and forth, and is that really worth it? And that’s for plagiarism of large blocks of text, which is probably the single most provable and clear-cut kind of misbehavior, much harder to argue about than the lies Glass or Bourgoin put forward. Eventually someone got fed up and made the plagiarism public, but it’d been a running joke in certain circles for a while before then.
I’m aware of other cases where a researcher is widely known among fellow researchers to engage in shady research practices, but no one wants to be the person to say so publicly; when it does, eventually, come out, you often hear from colleagues and peers, “I’m not surprised.”
Why not be the change I want to see in the world? Last year, I tried looking into what looked like a pretty clear-cut allegation of scientific misconduct. It ended up consuming tons of my time in a way that was not particularly clarifying. After getting statements from both sides, asking lots of followup questions, getting direct access to the email chain in which they initially disputed the allegations, etc., I still ended up incredibly confused about what had really happened, and unsure enough of any specific thing I could say that even though I suspected misconduct had been involved, I didn’t have something I felt comfortable writing.
…then, about two months after I gave up, the same scientist at the center of that frustrating, unrewarding investigation had another paper identified as fraudulent in a much more clear-cut way. That retroactively clarified a lot about the first debate. The person who pointed it out, though, was immediately the target of a very aggressive and personal online backlash. I don’t envy him.
Calling out liars is clearly a public service, but it’s not a very rewarding one, and it’s a bit hard to say exactly how much of a public service it is. I think people are basically right to anticipate that it’s likely to absorb a ton of their time and energy and lead to months of mudslinging and not necessarily get the liar to either shut up or stop being taken seriously.
But of course, having a public square with even a few prolific liars in it is also quite bad. Glass was so wildly successful as a reporter because the anecdotes he manufactured hit just the right spot; they were funny, memorable, gave vivid ‘proof’ to something people wanted to believe anyway. Fiction has more degrees of freedom than truth, and is likelier to hit on particularly divisive, memorable, and memetically compelling claims. Scientists who write fraudulent articles can write them faster, and as a result much of the early formative Covid-19 treatment research was fraudulent. I think a public square substantially shaped by lies is much worse than one that isn’t.
One very tentative takeaway:
It’s easy to forget that people might just be uncomplicatedly and deliberately lying. Most of the time they’re not. But occasionally they are, and if you fail to have it in your hypothesis space then you’ll end up incredibly confused by trying to triangulate the truth among different stories, assuming that everyone’s misremembering/narrativizing but not actively fabricating the information they’re presenting you with. I think it’s pretty important to have lying in your hypothesis space, and worth reading about liars until you have the intuition that sometimes people are just lying to you.
Another very tentative takeaway:
If you are a person interested in doing informal research that’s important and neglected, I think identifying scientific fraud, or identifying experts on the TED talk circuit who are doing substantially dishonest or misleading work, is valuable and largely not being done by more experienced and credentialed people. That’s not because they don’t have lengthy rants they’ll give you off the record; it’s because they don’t want to stake their personal credibility on it, and don’t want to deal with the frustrating ongoing arguments it would cause.
My current sense is that this work is not super important, but is reasonably good practice for important work; making sense of a muddle of claims and figuring out whether there’s clear-cut dishonesty, and if so how to make it apparent and communicate it, is a skill that transfers pretty well to making sense of other muddles of claims. I’d be pretty excited about hiring someone who’d successfully done and written up a couple of investigations like these.