The Parable of the Boy Who Cried 5% Chance of Wolf
Epistemic status: a parable making a moderately strong claim about statistics
Once upon a time, there was a boy who cried, “there’s a 5% chance there’s a wolf!”
The villagers came running, saw no wolf, and said “He said there was a wolf and there was not. Thus his probabilities are wrong and he’s an alarmist.”
On the second day, the boy heard some rustling in the bushes and cried “there’s a 5% chance there’s a wolf!”
Some villagers ran out and some did not.
There was no wolf.
The wolf-skeptics who stayed in bed felt smug.
“That boy is always saying there is a wolf, but there isn’t.”
“I didn’t say there was a wolf!” cried the boy. “I estimated the probability as low, but high enough to act on. A false alarm is much less costly than a missed detection when dying is at stake! The expected value of running out is good!”
The villagers didn’t understand the boy and ignored him.
On the third day, the boy heard some sounds he couldn’t identify but that seemed wolf-y. “There’s a 5% chance there’s a wolf!” he cried.
No villagers came.
It was a wolf.
They were all eaten.
Because the villagers did not think probabilistically.
The moral of the story is that we should expect many false alarms before a catastrophe hits, and that a string of false alarms is not strong evidence against an impending but improbable catastrophe.
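To put numbers on the boy’s expected-value argument, here is a minimal sketch; the 5% figure is from the parable, but both cost figures are made-up assumptions for illustration:

```python
# Expected-cost comparison for acting on a 5% wolf alarm.
# p_wolf comes from the parable; both cost figures are invented for illustration.
p_wolf = 0.05
cost_of_responding = 1      # the nuisance of running out, wolf or no wolf
cost_of_missed_wolf = 1000  # being eaten because nobody came

# If you respond, you pay the response cost and (we assume) avert the loss.
expected_cost_respond = cost_of_responding
# If you ignore the alarm, you pay nothing 95% of the time and dearly 5% of the time.
expected_cost_ignore = p_wolf * cost_of_missed_wolf  # 0.05 * 1000 = 50

print(expected_cost_respond, expected_cost_ignore)  # 1 vs 50.0
```

On these assumed numbers, running out is fifty times cheaper in expectation, even though 95% of the alarms turn out to be false.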
Each time somebody put a low but high enough probability on a pandemic being about to start, they weren’t wrong when it didn’t pan out. H1N1 and SARS and so forth didn’t become global pandemics. But they could have. The probability was low, but high enough to justify raising the alarm.
The problem is that people then thought to themselves, “Look! People freaked out about those last ones and it was fine, so people are terrible at predictions and alarmist, and we shouldn’t worry about pandemics.”
And then COVID-19 happened.
This will happen again for other things.
People will be raising the alarm about something, and in the media, the nuanced thinking about probabilities will be washed out.
You’ll hear people saying that X will definitely fuck everything up very soon.
And it doesnât.
And when the catastrophe doesn’t happen, don’t over-update.
Don’t say, “They cried wolf before and nothing happened, thus they are no longer credible.”
Say “I wonder what probability they or I should put on it? Is that high enough to set up the proper precautions?”
When somebody says that nuclear war hasn’t happened yet despite all the scares, or reminds you of the AI winters when all the hype came to nothing, remember the boy who cried a 5% chance of wolf.
Originally posted on my Twitter and personal blog.
As I understand it, counter-terrorism risk management tends to work by combining ongoing monitoring of potential threats with a pre-agreed escalation (alert levels) and response protocol, to ensure proportionate action to prevent (or mitigate) those threats.
This seems to provide a template solution to this issue.
I’d be keen for more policy advocacy for governments to implement similar protocol mechanisms for bio threats (and also for EA researchers to work with risk managers and try mapping risks along these lines).
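A toy sketch of what such a pre-agreed, threshold-based protocol might look like; the levels, thresholds, and responses here are all invented for illustration, not taken from any real framework:

```python
# Hypothetical alert-level protocol: map a probability estimate for a threat
# to a response agreed in advance, so each alarm triggers a proportionate
# action instead of a debate about the forecaster's credibility.
ALERT_LEVELS = [
    # (minimum probability estimate, level name, pre-agreed response)
    (0.20, "SEVERE", "activate the full response plan"),
    (0.05, "ELEVATED", "stand up a monitoring team and pre-position resources"),
    (0.00, "LOW", "continue routine monitoring"),
]

def respond(p_threat: float) -> str:
    """Return the pre-agreed response for a given probability estimate."""
    for threshold, level, action in ALERT_LEVELS:
        if p_threat >= threshold:
            return f"{level}: {action}"
    return "LOW: continue routine monitoring"

print(respond(0.05))  # ELEVATED: stand up a monitoring team and pre-position resources
```

The point of the design is that the thresholds are negotiated in advance, so a 5% estimate triggers a proportionate response automatically rather than relitigating whether the forecaster is an alarmist.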
Thanks for writing this! I think it’s great. It reminds me of another wild-animal metaphor about high-stakes decision-making under uncertainty: Reagan’s 1984 “Bear in the Woods” campaign ad.
I think that kind of reasoning is helpful when communicating about GCRs and X-risks.
What is the reasoning behind having the probability increase each time? It might be more interesting if the probability stayed at 5% each time. Because now you might get the conclusion “only once the odds are significantly high, say at 15%, should we start worrying”.
Great point! I didn’t give it much thought, honestly. I think you’re right and saying 5% each time is better. Gonna update it now.
Thanks for the suggestion!
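For concreteness: a constant 5% per day still compounds quickly. Assuming the days are independent (the parable doesn’t say), the chance of at least one real wolf over n days is 1 - 0.95^n:

```python
# Chance of at least one real wolf over n independent days, at 5% per day.
# Independence is an assumption; the parable doesn't specify it.
p_daily = 0.05
for n in (3, 14, 30):
    print(f"{n} days: {1 - (1 - p_daily) ** n:.0%}")
# 3 days: 14%, 14 days: 51%, 30 days: 79%
```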
Not sure if this is essential to the parable, but wouldn’t it be useful to distinguish between the following cases?:
(1) the boy says each day that there’s a 5% chance a wolf will come that evening, but isn’t saying a wolf is there right now
and
(2) the boy says there is a wolf in the village whenever he thinks there’s a 5+% chance this is true.
If the boy is doing (1) and the villagers panic now, then they’ve just misunderstood what he’s saying. If the boy is doing (2), then you’d understand why the villagers would start ignoring the boy (just like everyone ignores car alarms because they are so oversensitive).
I’m not sure either is neatly analogous to the X-risk case, which is that different people give different estimates, and these range from negligible to doom being apparently virtually certain. I guess that’s a bit like there being many different boys in the village, each of whom assigns a different percentage chance to the wolf appearing at some point (but none of whom are claiming it’s literally here now).
Related: A Failure, But Not of Prediction. The best case for x-risk reduction I’ve ever read, and it doesn’t even mention x-risks once.
Thanks for linking the post; I think it’s really great.
Note also a comment from the post:
Relatedly: Heuristics That Almost Always Work