Concerning the Recent 2019-Novel Coronavirus Outbreak
Update: Most information presented here is out of date. See the 80,000 hours page for more up-to-date information.
I have been researching the Wuhan Coronavirus for several hours today, and I have come to the tentative conclusion that the situation is worse than I initially thought.
Given my current understanding, it now seems reasonable to assign a non-negligible probability (>2%) to the proposition that the current outbreak will result in a global disaster (>50 million deaths resulting from the pathogen within 1 year). I understand this prediction will sound alarmist, but in this post I will outline some of the reasons why I have come to this conclusion.
I now believe that it is warranted for effective altruists to take particular actions to prepare for a resulting pandemic. The most effective action is likely researching how to prepare, in order to limit one's exposure to sources of the virus. Sending out evidence-based warnings to at-risk communities may also be effective at limiting the spread of the pathogen.
Summary of my reasons for believing that this outbreak could result in a global disaster
The current outbreak matches the criteria that scientists have identified as being particularly likely characteristics of a pandemic-induced global disaster. That is, it’s a disease that’s contagious during a long incubation period, has a high infection rate, has no known treatment, few people are immune, and it has a low but significant mortality rate. See this article for a summary of likely characteristics of a pandemic-induced global disaster.
Based on my research, I wasn’t able to identify any historically recent pathogen with these characteristics, giving me reason to believe that using an outside view to argue against alarmism may not be warranted. For reference, the 2003 SARS outbreak, the 2009 Swine Flu, and the several Ebola outbreaks do not match the profiles of a global disaster as completely as the current outbreak.
Estimates of the mortality rate vary, but one media source says, “While the single figures of deaths in early January seemed reassuring, the death toll has now climbed to above 3 percent.” This would put it roughly on par with the mortality rate of the 1918 flu pandemic, and over 10 times deadlier than a normal seasonal flu. It’s worth noting, however, that the 1918 flu pandemic killed mostly young adults, whereas this pathogen appears to kill mostly the elderly (which is the normal pattern for pathogens).
The incubation period (the period during which symptoms are not present but those infected can still infect others) could be as long as 14 days, according to many sources.
An Imperial College London report stated, “Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18th January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak.”
Compare the above infection rate to the H1N1 virus, which some estimate to have infected 10-20% of the world population in 2009. The World Health Organization has said, “The pandemic (H1N1) 2009 influenza virus has a R0 of 1.2 to 1.6 (Fraser, 2009) which makes controlling its spread easier than viruses with higher transmissibility.”
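(For what it’s worth, the “well over 60%” figure appears to follow from the standard control threshold for an epidemic: a fraction 1 − 1/R0 of transmission must be blocked, and with R0 = 2.6 this gives 1 − 1/2.6 ≈ 62%.)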
A simple regression model indicates that the growth rate of the pathogen is predictable and extremely rapid.
The number of cases as reported by the National Health Commission of China forms the basis of my regression model (you can currently find the number of cases reported in graphical format on the Wikipedia page here). An exponential regression model fit to the data reveals that the equation 38.7 * e^(0.389 * (t+11)) strongly retrodicts the number of cases (where t is the number of days since January 26th). In this model, the growth is very high.
[Update: Growth for January 27th remained roughly in line with the predicted growth from the exponential regression model. The new equation is 35.5*exp(0.401*t) where t is the number of days since January 15th]
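For readers who want to reproduce this kind of fit, here is a minimal sketch using SciPy. The day/case pairs below are illustrative placeholders rather than the exact National Health Commission figures, so the fitted parameters will differ slightly from those above.

```python
# A minimal sketch of the exponential fit described above, using SciPy.
# The day/case pairs below are illustrative placeholders, not the exact
# National Health Commission figures.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(13)  # days since January 15th
cases = np.array([41, 45, 62, 121, 198, 291, 440, 571, 830,
                  1287, 1975, 2744, 4515])

def exp_model(t, a, b):
    """Cumulative cases modeled as a * exp(b * t)."""
    return a * np.exp(b * t)

(a, b), _ = curve_fit(exp_model, t, cases, p0=(40, 0.4))
print(f"fit: {a:.1f} * exp({b:.3f} * t)")
print(f"implied doubling time: {np.log(2) / b:.2f} days")
```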
A top expert has estimated that approximately 100,000 people have already been infected, far more than the confirmed count of 2,808 (as of January 26th). If the number is that high, then the pathogen has likely already spread beyond the quarantine zone. The infection has also spread to 12 countries besides China, supporting this point.
The Metaculus community’s estimate for the number of total cases in 2020 is much higher than it was just two or three days ago. Compare this older question here, versus this new question (when it opens).
While several organizations are developing a vaccine, Wikipedia seems to indicate that it will take months before vaccines even enter trials, and we should expect that it will take about a year before a vaccine comes out.
Summary of my recommendations
I think it’s unlikely that EAs are in any special position to help stop the pandemic. However, we can guard ourselves against it by heeding early warnings, researching ways to limit our exposure to the virus, and using our platforms to warn those at risk.
The CDC has a page for preparing for disaster.
Currently, the pathogen appears to have a significant mortality rate, but it kills mainly older people, who are therefore the most at risk of dying.
Even if you contract the disease and don’t die, the symptoms are likely to be severe. One source says,
ARDS (acute respiratory distress syndrome) is a common complication. Between 25 and 32 percent of cases are admitted to the intensive care unit (ICU) for mechanical ventilation and sometimes ECMO (pumping blood through an artificial lung for oxygenation).
Other complications include septic shock, acute kidney injury, and virus-induced cardiac injury. The extensive lung damage also sets the lung up for secondary bacterial pneumonia, which occurs in 10 percent of ICU admissions.
Acknowledgements: Dony Christie and Louis Francini helped gather sources and write this post.
This straightforwardly got the novel coronavirus (now “covid-19”) on the radar of many EAs who were otherwise only vaguely aware of it, or thought it was another media panic, like bird flu.
The post also illustrates some of the key strengths and interests of effective altruism, like quantification, forecasting, and the ability to separate minor global events from bigger ones.
Note: The relevant Metaculus question for this forecast also currently has ~2% odds on this level of catastrophe.
In addition, I’ll mention:
Foretold is tracking ~20 questions and is open to anyone adding their own, but doesn’t have very many predictions.
In addition to the one you mentioned, Metaculus is tracking a handful of other questions and has a substantial number of predictions.
The Johns Hopkins disease prediction project lists 3 questions. You have to sign up to view them. (I also think you can’t see the crowd average before you’ve made your own prediction.)
Hey,
I suggest the question you’ve linked to has an artificially low upper bound. Could you please update the link to this Metaculus question, which, without that upper bound, provides a better prediction?
All Metaculus questions are about cases, not deaths. Currently the most up-to-date community prediction is a 7% chance of over a billion cases this year. I am not sure where you found the claim you cite. Apologies if I’ve made some mistake.
https://www.metaculus.com/questions/3529/how-many-human-infections-of-the-2019-novel-coronavirus-2019-ncov-will-be-estimated-to-have-occurred-before-2021-question-two/
The question has an upper bound of 100 million deaths, not cases. I don’t think that is “artificially low”.
Maybe you are confusing Hurford’s link with this old question, which does have an artificially low upper bound and deals with cases instead of deaths.
Most of them are, but the one Hurford linked to is explicitly about the number of deaths: “How many people will die as a result of the 2019 novel coronavirus (2019-nCoV) before 2021?”.
If you look at the bottom of the page, it says that the community predicts a ~3% chance of greater than 100 million deaths. Previously, it said 2% for the same number of deaths.
Just to be absolutely clear about what I am referring to, here is a screenshot of the relevant part of the UI.
You are entirely correct. My bad.
For a long time, I’ve believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: “I doubt it”. And sometimes, “Do you want to bet?”
So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, “This really seems like something someone should be ringing the alarm bells about.” But for a while, very few people were predicting anything big on respectable forums (Travis Fisher, on Metaculus, being an exception), so I stayed silent.
At some point, the evidence became overwhelming. It seemed very clear that this virus wasn’t going to be contained, and it was going to go global. I credit Dony Christie and Louis Francini with interrupting me from my dogmatic slumber. They were able to convince me—in the vein of Eliezer Yudkowsky’s Inadequate Equilibria—that the reason no one was talking about this probably had nothing whatsoever to do with the actual evidence. It wasn’t that people had a model and used that model to predict “no doom” with high confidence: it was a case of people not having models at all.
I thought at the time—and continue to think—that the starting place of all our forecasting should be using the outside view. But—and this was something Dony Christie was quite keen to argue—sometimes people just use the “outside view” as a rationalization; to many people, it means just as much, and no more than, “I don’t want to predict something weird, even if that weird thing is overwhelmingly determined by the actual evidence.”
And that was definitely true here: pandemics are not a rare occurrence in human history. They happen quite frequently. I am most thankful for belonging to a community that opened my mind long ago, by having abundant material written about natural pandemics, the Spanish flu, and future bio-risks. That allowed me to enter the mindset of thinking “OK, maybe this is real” rather than rejecting all the smoke under the door until the social atmosphere became right.
My intuitions, I’m happy to say, paid off. People are still messaging me about this post. Nearly two years later, I wear a mask when I enter a supermarket.
There are many doomsayers who always get things wrong. A smaller number of doomsayers are occasionally correct: often enough that it might be worth listening to them, even while rejecting them most of the time.
Yet, I am now entitled to a distinction that I did not think I would ever earn, and one that I perhaps do not deserve (as the real credit goes to Louis and Dony): the only time I’ve ever put out a PSA asking people to take some impending doom very seriously, was when I correctly warned about the most significant pandemic in one hundred years. And I’m pretty sure I did it earlier than any other effective altruist in the community (though I’m happy to be proven wrong, and congratulate them fully).
That said, there are some parts of this post I am not happy with. These include:
I only had one concrete prediction in the whole post, and it wasn’t very well-specified. I said that there was a >2% probability that 50 million people would die within one year. That didn’t happen.
I overestimated the mortality rate. At the time, I didn’t understand which was likely to be a greater factor in biasing the case fatality rate: the selection effect of missed cases, or the time-delay of deaths. It is now safe to say that the former was a greater issue. The infection fatality rate of Covid-19 is less than 1%, putting it into a less dangerous category of disease than I had pictured at the time.
Interestingly, one part I didn’t regret writing was the vaccine timeline I implicitly predicted in the post. I said, “we should expect that it will take about a year before a vaccine comes out.” Later, health authorities claimed that it would take much longer, with some outlets “fact-checking” the claim that a vaccine could arrive by the end of 2020. I’m pleased to say I outlasted the pessimists on this point, as vaccines started going into people’s arms on a wide scale almost exactly one year after I wrote this post.
Overall, I’m happy I wrote this post. I’m even happier to have friends who could trigger me to write it. And I hope, when the next real disaster comes, effective altruists will correctly anticipate it, as they did for Covid-19.
This is the boring take, but it’s worth noting that conditional on this spreading widely, perhaps the most important things to do are mitigating health impacts on you, not preventing transmission. And that means staying healthy in general, perhaps especially regarding cardiovascular health—a good investment regardless of the disease, but worth re-highlighting.
I’m not a doctor, but I do work in public health. Based on my understanding of the issues involved, if you want to take actions now to minimize severity later if infected, my recommendations are:
Exercise (which will help with cardiovascular health)
Lose excess weight (which can exacerbate breathing issues)
Get enough sleep (which assists your immune system generally)
Eat healthy (again, general immune system benefits)
And for preventing transmission, I know it seems obvious, but you need to actually wash your hands. Also, it seems weird, but studies indicate that brushing your teeth seems to help reduce infection rates.
And covering your mouth with a breathing mask may be helpful, as long as you’re not, say, touching food with your hands that haven’t been washed recently and then eating. Also, even if there is no Coronavirus, in general, wash your hands before eating. Very few people are good about doing this, but it will help.
Nice list!
Adding to it a little:
Avoid being sick with two things at once or being sick with something else immediately before.
When it comes to supplements, the evidence and effect sizes are not that strong. Referencing examine.com and what I generally remember, I roughly think that the best immune-system-strengthening supplements would be zinc and echinacea, with maybe mild effects from other things like vitamin C, vitamin D, and whey protein. There may be a couple of additional herbs that could do something, but it’s unclear whether they are safe to take for a long duration. What you’d aim for is decreasing the severity of viral pneumonia induced by something like influenza.
It’s possible that some existing antivirals will be helpful but currently this is unknown.
Re exercise: I worry that by putting myself in a catabolic state (by exercising particularly hard) I temporarily increase my risk. Also by being at the gym around sweaty strangers. Is this worry justified?
I don’t think so to any significant extent in most circumstances, and any tiny spike is counterbalanced by the general benefits David pointed to. My understanding (as a former competitive runner) is that extended periods of heavily overdoing it with exercise (overtraining) can lead to an inhibited immune system, among other symptoms, but this is rare among people who are just generally keeping fit (other than, e.g., someone jumping into marathon/triathlon training without building up). Other things to avoid/be mindful of are the usual (hanging around in damp clothes in the cold, hygiene in group sporting/exercise contexts, etc.).
I feel much more worried about being in a crowded gym than about immune effects of exercise. People are really bad at (a) cleaning gym equipment and (b) washing their hands.
To be clear, I’d guess this is less bad than many other social situations (bars, public transport, restaurants), as well as carrying a much clearer health upside. But perhaps there is an argument for switching to more solitary forms of exercise in outbreak situations?
And obviously you should not go to the gym if you yourself are sick (people apparently do this)!
Thanks for this. I found this article on how to personally prevent its spread helpful: https://foreignpolicy.com/2020/01/25/wuhan-coronavirus-safety-china/
For people living in the US, at what point does it become particularly important to start following these methods? I assume it’s always beneficial, but, risk-adjusted, not particularly important until there start being more cases in the US. Is that assumption right or dangerously wrong?
There doesn’t seem to be any local transmission in the US yet—so for now, I guess it probably wouldn’t help much (though it would still help prevent the spread of the common cold/flu!).
If/when there is local transmission, following this advice will be very important.
The CDC and WHO emphasise handwashing, not gloves. “WHO experts advise against wearing gloves on the basis that hand-washing is more important and people wearing gloves are less likely to wash their hands.”
https://www.theguardian.com/science/2020/jan/27/coronavirus-how-to-protect-yourself-from-infection
Alcohol-based hand sanitiser is also good. Often better than hand washing in practice as very few people actually wait for the water to get warm, or spend 20 seconds lathering the soap.
While I agree that most people don’t wash their hands correctly, I was recently surprised to learn that warm water doesn’t make much of a difference when it comes to killing germs.
https://medicalsciences.stackexchange.com/questions/500/does-hot-water-kill-germs-better-than-cold-water
Makes sense from the point of view of killing germs, and temperatures being tolerable for us also being tolerable for germs. My intuition is that it’s easier to get dirt (which contains germs) off hands with warmer water (similar to how it’s easier to wash dishes with warmer water).
I’m willing to bet up to $100 at even odds that by the end of 2020, the confirmed death toll by the Wuhan Coronavirus (2019-nCoV) will not be over 10,000. Is anyone willing to take the bet?
I accepted a bet on January 30th with a friend with the above terms. Nobody else offered to bet me. Since then, I have updated my view. I now give a ~60% probability that there will be over 10,000 deaths. https://predictionbook.com/predictions/198256
My update is mostly based on (a) Metaculus’s estimate of the median number of deaths updating from ~3.5k to now slightly over ~10K (https://www.metaculus.com/questions/3530/how-many-people-will-die-as-a-result-of-the-2019-novel-coronavirus-2019-ncov-before-2021/) and also (b) some naive extrapolation of the possible total number of deaths based on the Feb 4th death data here: https://www.worldometers.info/coronavirus/coronavirus-death-toll/
Incubation period and Chinese government coverup efforts are relevant to this question, but roughly speaking, if the actual number of infections is ~35x the reported number, and there’s no uptick in mysterious deaths in hospitals, then the actual mortality rate is ~1/35 the reported number, more in line with a normal flu than the 1918 Spanish flu.
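(Making the arithmetic explicit: with the ~3% reported figure cited earlier in the post, a 35x undercount of infections, and deaths assumed to be fully counted, the actual mortality rate would be roughly 3% / 35 ≈ 0.09%.)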
Current death rates are likely to underestimate the total mortality rate, since the disease likely has not yet run its course in most of the people who are infected.
I’ll add information about incubation period to the post.
By total mortality rate, do you mean the total number of people who eventually die, or do you mean a percentage?
If the former I agree.
If you mean the latter… I see it as a toss-up between the selection effect of the more severely affected being the ones we know have it (which would make the true mortality rate lower than the published numbers) and the time needed for the disease to fully progress (which would make the true mortality rate higher than the published numbers).
It should be noted that the oft-cited case-fatality ratio of 2.5% for the 1918 flu might be inaccurate, and the true CFR could be closer to 10%: https://rybicki.blog/2018/04/11/1918-influenza-pandemic-case-fatality-rate/
EDIT: Also see this Twitter thread: https://twitter.com/ferrisjabr/status/1232052631826100224
Howie and I just recorded a 1h15m conversation going through what we do and don’t know about nCoV for the 80,000 Hours Podcast.
We’ve also compiled a bunch of links to the best resources on the topic that we’re aware of which you can get on this page.
https://www.worldometers.info/coronavirus/
I find the analysis from this link very interesting. It suggests that R0 is higher than initially estimated, at 3-4 (rather than the WHO’s 1.4-2.5), but that the national China mortality rate drops to 0.3% if the province of Hubei is excluded (the reported mortality rate of Wuhan alone is 5.5%). This would be consistent with the theory that the number of cases is underreported in Wuhan, due to a shortage of testing capacity and perhaps underreporting. A recent Lancet report by Professor Gabriel Leung from the University of Hong Kong https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30260-9/fulltext estimates 76,000 cases in Wuhan as of Jan 25th based on an R0 of 2.68, more than 30x the reported number, which would put the mortality rate at well under 0.5%.
This suggests the pandemic could be more difficult to control than expected, but also that the mortality rate is much lower (perhaps in the region of 3x flu).
This may also mean the main damage could be through economic impact.
I am extremely skeptical of the high R0 estimate for one reason: SARS has a lower R0, but was much worse overseas than nCov currently is.
According to the Lancet report you linked, SARS had an R0 of around 2 in China, so substantially lower than nCov. However, we know how the first cases abroad spread. Compared to the current situation, it was far, far worse, both by mortality and by number of cases. The first case in Toronto infected first her family, then some hospital workers, who in turn spread it further until the whole hospital had to be closed. Eyeballing the graph for Canada found here, this really does not look like the situation we currently have, despite higher interconnection and more rigorous testing (more testing → more discovered cases).
So far the majority of overseas cases are still travelers from China; the people they infected are generally close contacts; it is positively surprising how few spouses seem to get the virus. I don’t think this can be attributed solely to higher awareness. Even before the news of a new dangerous virus broke, there were no human-to-human transmissions overseas, despite some travelers already being present.
For posterity, I was wrong here because I was unaware of the dispersion parameter k that is substantially higher for SARS than for Covid-19.
Great, thanks for this and the link. I am still trying to understand this more as it evolves. I guess, as the monitoring and control are now much stronger, hopefully R0 will come down as well.
The link to the Lancet study seems to be broken when I click on it, although the text of the link itself is correct. This should be (hopefully) a working link: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30260-9/fulltext
I am an EA living in China right now. Thanks for sharing this post on the Coronavirus. I am also very interested in these questions.
We do not know what percentage of people experience symptoms so mild that they do not seek medical attention and so do not appear in the ‘Suspected’ or ‘Confirmed’ case statistics.
here: https://ncov.dxy.cn/ncovh5/view/pneumonia
and here: https://www.worldometers.info/coronavirus/
Here’s Google Translate if you need it: https://translate.google.com/
Nevertheless, as you mentioned, attempts have been made to model the spread of the infection and to estimate the number of people carrying the virus so far.
I have created a very simple spreadsheet with three scenarios here:
https://docs.google.com/spreadsheets/d/1qSNLQC5BpA-Gah0INyolFpPvTAGFphKUwTxQZN7FWI4/edit?usp=sharing
The Red scenario = 50% of infections go undiagnosed (unrecorded).
The Yellow scenario = 70% of infections go undiagnosed (unrecorded).
The Green scenario = 85% of infections go undiagnosed (unrecorded).
Each scenario has different estimates of the ‘real’ number of infections and percentages that progress to either a serious/critical condition or to death.
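For concreteness, here is a minimal sketch of the arithmetic the spreadsheet performs. The confirmed-case, death, and serious/critical figures below are placeholders; substitute the current statistics from the sources linked above.

```python
# Sketch of the scenario arithmetic: if a given fraction of infections
# goes undiagnosed, the "real" infection count is larger than the
# confirmed count, and the implied severity rates are correspondingly
# lower. The input figures are placeholders, not current statistics.
confirmed = 4515   # confirmed cases (placeholder)
deaths = 106       # deaths so far (placeholder)
serious = 976      # serious/critical cases (placeholder)

scenarios = {"Red": 0.50, "Yellow": 0.70, "Green": 0.85}  # undiagnosed share

for name, undiagnosed in scenarios.items():
    real_infections = confirmed / (1 - undiagnosed)
    print(f"{name}: ~{real_infections:,.0f} real infections, "
          f"death rate {deaths / real_infections:.2%}, "
          f"serious/critical rate {serious / real_infections:.2%}")
```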
I found the serious/critical condition numbers here: https://twitter.com/BNODesk
We can perhaps use these scenarios to roughly ‘frame’ whether this virus is similar or much worse than the seasonal flu. Thoughts?
There are potential inaccuracies with the numbers in the three scenarios.
On the potential positive (good) side, perhaps the high critical condition and death rates reflect the early stages of treating a novel virus and it’s possible that the rate will not be so high in future as better treatment protocols are followed. Certainly the rates seem much worse in Hubei Province compared to the rest of China and the rest of the world.
On the negative (bad) side, the death rate and serious/critical condition rate may turn out to be higher than the number reflected here. This is because many of the ‘suspected and confirmed’ cases will later progress to a serious/critical condition or death. There is likely to be a time lag in this progression which we do not yet see in the stats.
Moreover, perhaps the virus could become more deadly if medical services were to become overwhelmed.
Perhaps the questions could be summarised:
What percentage of cases go unrecorded (undiagnosed)? What is the real number of people with infections right now?
Which scenario: Green, Yellow or Red is closest to the truth? What confidence would you assign to the probability of each of the three scenarios?
What is the ‘true’ risk of progressing to a serious/critical condition or death once infected with Coronavirus?
I would love to hear any thoughts from others on any of the points made here.
Just some quick thoughts:
-Your lowest discovery rate (15%) might still be too high; this recent preprint estimates 0.05%: https://www.medrxiv.org/content/10.1101/2020.01.23.20018549v2.full.pdf
-The spreadsheet compares the current number of deaths with the current number of known cases. However, deaths will always lag behind the number of confirmed cases, so this comparison will underestimate the death rate.
-It might also be interesting to do some back-of-the-envelope math on the cases outside of China, since the discovery rate there should be much higher. So far, there are very few serious cases, and I think no critical ones. However, most confirmed infections outside of China are actually people who were infected in China and then traveled overseas, which selects for people healthy enough to travel, so mostly younger people. Doing a proper analysis of those numbers will be hard; one could compare them to SARS figures for the relevant age bracket.
Sorry, the link to the ‘live’ statistics from Chinese Health centres was broken:
https://ncov.dxy.cn/ncovh5/view/pneumonia
I’ve also updated the link in the original post.
Here’s Google Translate if you need it: https://translate.google.com/
Thanks for the article. One thing I’m wondering about that has implications for the large scale pandemic case is how much equipment for “mechanical ventilation and sometimes ECMO (pumping blood through an artificial lung for oxygenation)” does society have and what are the consequences of not having access to such equipment? Would such people die? In that case the fatality rate would grow massively to something like 25 to 32%.
Whether there is enough equipment would depend on how many people get sick at once, whether more than one person can use the same equipment in an interleaved fashion, how long each sick person needs the equipment, whether there are good alternatives to the equipment, and how quickly additional equipment could be built or improvised.
So the case I’d be worried about here would be a very quick spread where you need rare expensive equipment to keep the fatality rate down where it is currently.
A study published today attempts to estimate the nCoV incubation period:
As the authors note, this estimate indicates an incubation period remarkably similar to that of the Middle East respiratory syndrome.
It’s now been more than two weeks since the first infected people were diagnosed in countries like Thailand, but there’s no outbreak in Thailand. There have so far been 14 cases in Thailand, all brought directly from China rather than arising from person-to-person infection in Thailand.
That makes me feel more skeptical that this will become a worldwide pandemic.
FYI—a study of outcomes as of Jan 25 for all 99 2019-nCoV patients admitted to a hospital in Wuhan between Jan 1 and Jan 20.
Many caveats apply. Only includes confirmed cases, not suspected ones. People who end up at a hospital are selected for being more severely ill. 60% of the patients have not yet been discharged so haven’t experienced the full progression of the disease. Etc.
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30211-7/fulltext#%20
I wonder what sort of Fermi calculation we should apply to this? My quick (quite possibly wrong) numbers are:
P(it goes world-scale pandemic) = 1⁄3, if I believe the exponential spreading math (hard to get my human intuition behind) and the long, symptomless, contagious incubation period
P(a particular person gets it | it goes world scale pandemic) = 1⁄3, estimating from similar events
P(a particular person dies from it | a particular person gets it) = 1⁄30, and this may be age or preexisting condition agnostic and could, speculatively, increase if vital equipment is too scarce (see other comment)
=> P(death of a randomly selected person from it) = ~1/300
What are your thoughts?
Updating the Fermi calculation somewhat:
P(it goes world-scale pandemic) = 1⁄3, no updates (the Metaculus estimate referenced in another comment counteracted my better first-principles estimation)
P(a particular person gets it | it goes world scale pandemic) = 1⁄2, updating based on the reproduction number of the virus
P(a particular person dies from it | a particular person gets it) = 0.09, updating based on a guess of 1⁄2 probability that rare equipment is needed and a random guess of 1⁄2 probability of fatality without it: 1/2 * 1/30 + 1/2 * (probability of pneumonia: (1/3 + 1/4) * 1/2) * (probability of fatality given pneumonia and rare equipment is needed: 1/2) ≈ 0.09
=> P(death of a randomly selected person from it) = ~1/67
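For anyone who wants to check or vary the numbers, here is the same calculation as a short script (all inputs are the subjective guesses above, not data):

```python
# The updated Fermi estimate above, as a short script.
# All inputs are subjective guesses from the comment, not data.
p_pandemic = 1 / 3   # P(it goes world-scale pandemic)
p_infected = 1 / 2   # P(a particular person gets it | pandemic)

# P(death | infected), per the formula above: half weight on the
# baseline case (fatality ~1/30), half weight on the case where
# pneumonia develops (probability (1/3 + 1/4) * 1/2) and rare
# equipment is needed, with fatality then ~1/2.
p_pneumonia = (1 / 3 + 1 / 4) * 1 / 2
p_death_given_infected = 1 / 2 * (1 / 30) + 1 / 2 * p_pneumonia * (1 / 2)

p_death = p_pandemic * p_infected * p_death_given_infected
print(f"P(death | infected) ≈ {p_death_given_infected:.3f}")        # ≈ 0.09
print(f"P(random person dies) ≈ {p_death:.4f} ≈ 1/{1 / p_death:.0f}")  # ≈ 1/67
```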
I’m not entirely sure what to think of the numbers; I cannot deny the logic, but it’s pretty grim, and I hope I’m missing some critical details, that my intuitions are wrong, or that unknown unknowns make things more favorable.
Hopefully future updates and information resolves some of the uncertainties here and makes the numbers less grim. One large uncertainty is how the virus will evolve in time.
Hmm, interesting. This goes strongly against my intuitions. In case of interest, I’d be happy to give you 5:1 odds that this Fermi estimate is at least an order of magnitude too severe (for a small stake of up to £500 on my end, £100 on yours). Resolved in your favour if 1 year from now the fatalities are >1/670 (or 11.6M based on current world population); in my favour if <1/670.
(Happy to discuss/modify/clarify terms of above.)
Edit: We have since amended the terms to 10:1 (50GBP of Justin’s to 500GBP of mine).
Hmm… I will take you up on a bet at those odds and with those resolution criteria. Let’s make it 50 GBP of mine vs 250 GBP of yours. Agreed?
I hope you win the bet!
(note: I generally think it is good for the group epistemic process for people to take bets on their beliefs but am not entirely certain about that.)
Agreed, thank you Justin. (I also hope I win the bet, and not for the money—while it is good to consider the possibility of the most severe plausible outcomes rigorously and soberly, it would be terrible if it came about in reality). Bet resolves 28 January 2021. (Though if it’s within an order of magnitude of the win criterion, and there is uncertainty re: fatalities, I’m happy to reserve final decision for 2 further years until rigorous analysis is done—e.g. see the swine flu epidemiology studies, which updated fatalities upwards significantly several years after the outbreak.)
To anyone else reading. I’m happy to provide up to a £250 GBP stake against up to £50 of yours, if you want to take the same side as Justin.
The bet is on.
Strong kudos for betting. Your estimates seem quite off to me but I really admire you putting them to the test. I hope, for the sake of the world, that you are wrong.
Re: whose mortality estimates to use, I suggest we take Metaculus’s list here (WHO has the highest ranking) as the standard (with the caveat above).
https://www.metaculus.com/questions/3530/how-many-people-will-die-as-a-result-of-the-2019-novel-coronavirus-2019-ncov-before-2021/
Though it’s interesting to note Justin’s Fermi estimate is not far off how one of Johns Hopkins’ CHS scenarios played out (coronavirus, animal origin, 65m deaths worldwide).
http://www.centerforhealthsecurity.org/event201/scenario.html
Note: this was NOT a prediction (and had some key differences including higher mortality associated with their hypothetical virus, and significant international containment failure beyond that seen to date with nCov)
http://www.centerforhealthsecurity.org/newsroom/center-news/2020-01-24-Statement-of-Clarification-Event201.html
Hmm. You’re betting based on whether the fatalities exceed the mean of Justin’s implied prior, but the prior is really heavy-tailed, so it’s not actually clear that your bet is positive EV for him. (E.g., “1:1 odds that you’re off by an order of magnitude” would be a terrible bet for Justin, because he has 2⁄3 credence that there will be no pandemic at all.)
Justin’s credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attachment process. If (roughly, I think) the median of this distribution is 1⁄10 of the mean, then this bet is negative EV for Justin despite seeming generous.
In the future you could avoid this trickiness by writing a contract whose payoff is proportional to the number of deaths, rather than binary :)
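To illustrate the point concretely, here is a toy simulation. The mixture belief below (2/3 no pandemic, heavy-tailed lognormal deaths otherwise) is an arbitrary stand-in for the actual prior, and the stakes follow the originally offered £100-vs-£500 terms; under this stand-in belief, the seemingly generous binary bet comes out negative EV for the taker.

```python
# Toy simulation: a binary bet judged against a heavy-tailed belief.
# The belief distribution here is an arbitrary stand-in, not Justin's
# actual prior; stakes follow the originally offered 5:1 terms.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Mixture belief: 2/3 chance of no world-scale pandemic (zero deaths),
# 1/3 chance of a pandemic with lognormally distributed deaths.
pandemic = rng.random(n) < 1 / 3
deaths = np.where(pandemic, rng.lognormal(mean=13, sigma=2, size=n), 0.0)

threshold = 11.6e6  # 1/670 of world population, per the bet terms
p_win = (deaths > threshold).mean()

# The taker wins 500 GBP if deaths exceed the threshold, loses 100 otherwise.
ev_binary = p_win * 500 - (1 - p_win) * 100
print(f"P(win) = {p_win:.3f}, EV for the taker = {ev_binary:+.1f} GBP")
# Under this stand-in belief the EV is clearly negative, even though
# 5:1 odds sound generous, because the median is far below the mean.
```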
This seems fair. I suggested the bet quite quickly. Without having time to work through the math of the bet, I suggested something that felt on the conservative side from the point of view of my beliefs. The more I think about it, (a) the more confident I am in my beliefs, and (b) the more I feel it was not as generous as I originally thought*. I have a personal liking for binary bets rather than proportional payoffs. As a small concession in light of the points raised, I’d be happy to modify the terms retroactively to make them more favourable to Justin, offering either of the following:
(i) Doubling the odds against me to 10:1 odds (rather than 5:1) on the original claim (at least an order of magnitude lower than his fermi). So his £50 would get £500 of mine.
OR
(ii) 5:1 on at least 1.5 orders of magnitude (50x) lower than his fermi (rather than 10x).
(My intuition is that (ii) is a better deal than (i) but I haven’t worked it through)
(*i.e. at time of bet—I think the likelihood of this being a severe global pandemic is now diminishing further in my mind)
Sure, I’ll take the modification to option (i). Thanks Sean.
10:1 on the original (1 order of magnitude) it is.
I respect that you are putting money behind your estimates and get the idea behind it, but I would recommend you reconsider whether you want to do this (publicly) in this context, and maybe consider removing these comments. Not only because it looks quite bad from the outside, but also because I’m not sure it’s appropriate on a forum about how to do good, especially if the virus should happen to kill a lot of people over the next year (which would also mean that many more people will have lost someone to the virus). I personally found this quite morbid, and I have a lot more context on EA culture than a random person reading this; e.g., I can guess that the primary motivation is not “making money” or “the feeling of winning and being right” (which would be quite inappropriate in this context), but that might not be clear to others with less context.
(Maybe I’m also the only one having this reaction in which case it’s probably not so problematic)
edit: I can understand if people just disagree with me because they think there’s no harm done by such bets, but I’d be curious to hear from the people who downvoted whether, in addition to that, you think comments like mine are harmful because they are bad for epistemic habits or something. I’d be grateful to hear if someone thinks comments like these shouldn’t be made!
I have downvoted this, here are my reasons:
Pretty straightforwardly, I think having correct beliefs about situations like this is exceptionally important, and maybe the central tenet this community is oriented around. Having a culture of betting on those beliefs is one of the primary ways in which we incentivize people to have accurate beliefs in situations like this.
I think doing so publicly is a major public good, and is helping many others think more sanely about this situation. I think the PR risk that comes with this is completely dwarfed by that consideration. I would be deeply saddened to see people avoid taking these bets publicly, since I benefit a lot from seeing people’s beliefs put to the test this way, and I am confident many others do too.
Obviously, providing your personal perspective is fine, but I don’t think I want to see more comments like this, and as such I downvoted it. I think a forum that had many comments like this would be a forum I would not want to participate in, and I expect it to directly discourage others from contributing in ways I think are really important and productive (for example, it seems to have caused Sean below to seriously consider deleting his comments, which I would consider a major loss).
I also think that, perception-wise, this exchange communicates one of the primary aspects that makes me excited about this community. Seeing exchanges like the above is one of the primary reasons why I am involved in the Effective Altruism community, and is what caused me to become interested in and develop trust in many of the institutions of the community in the first place. As such, I think this comment gets the broader perception angle backwards.
The comment also seems to repeatedly sneak in assumptions of broader societal judgement, without justifying doing so. The comment makes statements that extend far beyond personal perception, and indeed primarily makes claims about external perception and its relevance, which strike me as straightforwardly wrong and badly argued:
I don’t think it looks bad, and think that on the opposite, it communicates that we take our beliefs seriously and are willing to put personal stakes behind them. There will of course be some populations that will have some negative reaction to the above, but I am not particularly convinced of the relevance of their perception to our local behavior here on the forum.
I am quite confused why it would be “inappropriate”. Our culture of betting is a key part of a culture that helps us identify the most effective ways to do good, and as such is highly appropriate for this forum. It seems to me you are simply asserting that it might be inappropriate, and as such are making an implicit claim about what the norms on such a forum should be, which is something I strongly disagree with.
I don’t think these motivations would be inappropriate in this context. Those are fine motivations that we healthily leverage in large parts of the world to cause people to do good things, so of course we should leverage them here to allow us to do good things.
The whole economy relies on people being motivated to make money, and it has been a key ingredient to our ability to sustain the most prosperous period humanity has ever experienced (cf. more broadly the stock market). Of course I want people to have accurate beliefs by giving them the opportunity to make money. That is how you get them to have accurate beliefs!
Similarly the feeling of being right is probably what motivates large fractions of epidemiologists, trying to answer questions of direct relevance to this situation. Academia itself runs to a surprising degree on the satisfaction that comes from being right, and I think we should similarly not label that motivation as “inappropriate”, and instead try to build a system that leverages that motivation towards doing good things and helping people have accurate beliefs. Which is precisely what public betting does!
I emphatically object to this position (and agree with Chi’s). As best as I can tell, Chi’s comment is more accurate and better argued than this critique, and so the relative karma between the two dismays me.
I think it is fairly obvious that ‘betting on how many people are going to die’ looks ghoulish to commonsense morality. I think the articulation of why this would be objectionable is only slightly less obvious: the party on the ‘worse side’ of the bet seems to be deliberately situating themselves to be rewarded as a consequence of the misery others suffer; there would also be suspicion about whether the person might try to contribute to the bad situation to seek a pay-off; and perhaps a sense that one belittles the moral gravity of the situation by using it for prop betting.
Thus I’m confident if we ran some survey on confronting the ‘person on the street’ with the idea of people making this sort of bet, they would not think “wow, isn’t it great they’re willing to put their own money behind their convictions”, but something much more adverse around “holding a sweepstake on how many die”.
(I can’t find an easy instrument for this beyond asking people/anecdata: the couple of non-EA people I’ve run this by have reacted either negatively or very negatively, and I know commenters on forecasting questions which boil down to “will public figure X die before date Y” register their distaste. If there is a more objective assessment accessible, I’d offer odds of around 4:1 on the ratio of positive:negative sentiment being <1.)
Of course, I think such an initial ‘commonsense’ impression would be very unfair to Sean or Justin: I’m confident they engaged in this exercise only out of a sincere (and laudable) desire to try to better understand an important topic. Nonetheless (and to hold them to much higher standards than my own behaviour), one may suggest it is a lapse of practical wisdom to act on one laudable motivation without tempering it with the other moral concerns one should also be mindful of.
One needs to weigh the ‘epistemic’ benefits of betting (including higher-order terms) against the ‘tasteless’ complaint (both the moral-pluralism case of it possibly being bad, and the more prudential case of it looking bad to third parties). If the epistemic benefits were great enough, we should reconcile ourselves to the costs of sometimes acting tastelessly (triage is distasteful too) or of third parties (reasonably, if mistakenly) thinking less of us.
Yet the epistemic benefits on the table here (especially on the margin of ‘feel free to bet, save on commonsense ghoulish topics’) are extremely slim. The rate of betting in EA/rationalist land on any question is very low, so the signal you get from small-n bets is trivial. There are other options, especially for this question, which give you much more signal per unit of activity—given that, unlike the stock market, people are interested in the answer for other-than-pecuniary motivations: both Metaculus and the Johns Hopkins prediction platform have relevant questions which are much more active, and where people are offering more information.
Given the marginal benefits are so slim, they are easily outweighed by the costs Chi notes. And they are.
I am confused. Both of these are environments in which people participate in something very similar to betting. In the first case they are competing pretty directly for internet points, and in the second they are competing for monetary prices.
Those two institutions strike me as great examples of the benefit of having a culture of betting like this, and also strike me as similarly likely to create offense in others.
We seem to agree on the value of those platforms, and both their public perception and their cultural effects seem highly analogous to the private betting case to me. You even explicitly say that you expect similar reactions to questions like the above being brought up on those platforms.
I agree with you that if there were only the occasional one-off bet on the forum being critiqued here, the epistemic cost would be minor. But I am confident that in a community whose relationship to betting was more analogous to what Chi’s appears to be, we would never have actually built the Metaculus prediction platform. That part of our culture is what enabled us to have these platforms in the first place (as I think an analysis of the history of Metaculus will readily reveal, and which I think can be traced pretty directly to a lot of the historic work around prediction markets, which have generally received public critique very similar to the one you describe).
I think this is almost entirely dependent on the framing of the question, so I am a bit uncertain about this. If you frame the question as something like “is it important for members of a research community to be held accountable for the accuracy of their predictions?” you will get a pretty positive answer. If you frame the question as something like “is it bad for members of a research community to profit personally from the deaths and injuries of others?” you will obviously get a negative answer.
In this case, I do think that the broader public will have a broadly negative reaction to the bet above, which I never argued against. The thing I argued against was that minor negative perception in the eyes of the broader public was of particularly large relevance here on our forum.
I additionally argued that the effects of that perception were outweighed by the long-term positive reputational effects of having skin-in-the-game of even just a small amount of our beliefs, and the perception of a good chunk of a much more engaged and more highly-educated audience, which thinks of our participation in prediction-markets and our culture of betting as being one of the things that sets us apart from large parts of the rest of the world.
I’m extremely confident a lot more opprobrium attaches to bets where the payoff is in money versus those where the payoff is in internet points, etc. As you note, I agree certain forecasting questions (even without cash) provoke distaste: if those same questions were on a prediction market, the reaction would be worse. (There’s also likely an issue of the money leading to questions about one’s motivation—if epi types are trying to predict a death toll and not getting money for their efforts, it seems their efforts have a laudable purpose in mind; less so if they are riding money on it.)
This looks like a stretch to me. Chi can speak for themselves, but their remarks don’t seem to entail a ‘relationship to betting’ writ large, but an uneasy relationship to morbid topics in particular. Thus the policy I take them to be recommending (which I also endorse) of refraining making ‘morbid’ or ‘tasteless’ bets (but feel free to prop bet to heart’s desire on other topics) seems to have very minor epistemic costs, rather than threatening some transformation of epistemic culture which would mean people stop caring about predictions.
For similar reasons, this also seems relatively costless in terms of other perceptions: refraining from ‘morbid’ topics for betting only excludes a small minority of questions one can bet upon, leaving plenty of opportunities to signal its virtuous characteristics re. taking ideas seriously whilst avoiding those which reflect poorly upon it.
This is directly counter to my experience of substantive and important EA conversation. All the topics I’m interested in are essentially morbid topics when viewed in passing by a ‘person on the street’. Here are examples of such questions:
How frequently will we have major pandemics that kill over N people?
How severe (in terms of death and major harm) will the worst pandemic in the next 10 years be?
How many lives are saved by donations to GiveWell recommended charities? If we pour 10-100 million dollars into them, will we see a corresponding decline in deaths from key diseases globally?
As AI gets more powerful, will we get warning shots across the bow that injure or kill <10,000 people with enough time for us to calibrate to the difficulty of the alignment problem, or will it be more sudden than that?
Like, sometimes I even just bet on ongoing death rates. Someone might say to me “The factory farming problem is very small, of course” and I’ll reply “I will take a bet with you, if you’re so confident. You say what you think it is, I’ll say what I think it is, then we’ll use Google to find out who’s right. Because I expect you’ll be wrong by at least 2 orders of magnitude.” I’m immediately proposing a bet on the number of chickens being murdered per year, or some analogous number. I would also make similar bets when someone says a problem is big or small, e.g. “Ageing/genocide/cancer is/isn’t very important” → “I’ll take a bet on the number of people who’ve died from it in the last 10 years.”
All of your examples seem much better than the index case I am arguing against. Commonsense morality attaches much less distaste to cases where those ‘in peril’ are not crisply identified (e.g. “how many will die in some pandemic in the future” is better than “how many will die in this particular outbreak”, which is better than “will Alice, currently ill, live or die?”). It should also find that bets on historical events are (essentially) fine, as whatever good or ill is implicit in these has already occurred.
Of course, I agree that your examples would be construed as morbid to some degree. But my recommendation wasn’t “refrain from betting on any question where we can show the topic is to some degree morbid” (after all, betting on the GDP of a given country could be construed this way, given its large downstream impacts on welfare). It was to refrain in those cases where it appears very distasteful and there’s no sufficient justification. As it seems I’m not expressing this balancing consideration well, I’ll belabour it.
#
Say, God forbid, one of my friend’s children has a life-limiting disease. On its face, it seems tasteless for me to compose predictions at all on questions like “will they still be alive by Christmas?”. Carefully scrutinising whether they will live or die seems to run counter to the service I should be providing as a supporter of my friend’s family and someone with the child’s best interests at heart. It goes without saying that opening a book on a question like this seems deplorable, and offering (and confirming) bets where I take the pessimistic side, despicable.
Yet other people do have good reason to try to compose an accurate prediction on survival or prognosis. The child’s doctor may find themselves in the invidious position of recognising that their duty to give my friend’s family the best estimate they can runs at cross purposes to other moral imperatives that apply too. The commonsense/virtue-ethicsy hope is that the doctor can strike the balance that best satisfies these cross purposes, so that otherwise callous thoughts and deeds are justified by their connection to providing important information to the family.
Yet not every incremental information benefit is enough to justify any degree of distastefulness. If the doctor opened a prediction market on a local children’s hospice, I think (even if they were solely and sincerely motivated by good purposes, such as providing families with in-expectation better prognostication now and in the future) they would have gravely missed the mark.
Of the options available, ‘bringing money into it’ generally looks more ghoulish the closer the connection between ‘something horrible happening’ and ‘payday!’. A mere prediction platform is better (although still probably the wrong side of the line unless we have specific evidence it will give a large benefit); paying people to make predictions on said platform (but paying for activity and aggregate accuracy rather than direct ‘bet results’) is slightly better still. “This family’s loss (of their child) will be my gain (of some money)” is the sort of grotesque counterfactual good people would strenuously avoid being party to, save for exceptionally good reason.
#
To repeat: it is the balance of these factors—which come in degrees—that determines the final evaluation. So, for example, I’m not against people forecasting the ‘nCoV’ question (indeed, I do as well), but the addition of money takes it the wrong side of the line (notwithstanding the money being ridden on it for laudable motivations). Likewise, I’m happy for people to prop bet pretty freely on some of your questions (the somewhat less ghoulish ones), but not on the ‘nCoV’ one (or some even more extreme versions), etc. etc. etc.
I confess some irritation. Whilst you and Oli are pressing arguments (sorry—“noticing confusion”) re. there not being a crisp quality that attaches to the objectionable cases yet not the less objectionable ones (e.g. ‘You say this question is “morbid”—but look here! here are some other questions which are qualitatively morbid too, and we shouldn’t rule them all out’), you are in fact committed to some sort of balancing account.
I presume (hopefully?) you don’t think a ‘child hospice sweepstakes’ would be a good idea for someone to try (even if it may improve our calibration! and it would give useful information re. paediatric prognostication which could be of value to the wider world! and capitalism is built on accurate price signals! etc. etc.). As you’re not biting the bullet on these reductios (nor bmg’s, nor others), you implicitly accept that all the considerations about why betting is a good thing are pro tanto, and can be overcome at some extreme limit of ghoulishness, etc.
How to weigh these considerations is up for grabs. Yet picking out each individual feature of ghoulishness in turn and showing that it, alone, is not enough to warrant refraining from highly ghoulish bets (where the true case against is composed of other factors alongside the one being shown to be individually insufficient) seems an exercise in the fallacy of division.
#
I also note that all the (few) prop bets I recall in EA up until now (including one I made with you) weren’t morbid. Which suggests you wouldn’t appreciably reduce the track record of prop bets which show (as Oli sees it) admirable EA virtues of skin in the game.
I’m tapping out of this discussion. I disagree with much of the above, but I cannot respond to it properly for now.
At least from a common-sense morality perspective, this doesn’t sit right with me. I do feel that it would be wrong for two people to get together to bet about some horrible tragedy—“How many people will die in this genocide?” “Will troubled person X kill themselves this year?” etc. -- purely because they thought it’d be fun to win a bet and make some money off a friend. I definitely wouldn’t feel comfortable if a lot of people around me were doing this.
When the motives involve working to form more accurate and rigorous beliefs about ethically pressing issues, as they clearly were in this case, I think that’s a different story. I’m sympathetic to the thought that it would be bad to discourage this sort of public bet. I think it might also be possible to argue that, if the benefits of betting are great enough, then it’s worth condoning or even encouraging more ghoulishly motivated bets too. I guess I don’t really buy that, though. I don’t think that a norm specifically against public bets that are ghoulish from a common-sense morality perspective would place very important limitations on the community’s ability to form accurate beliefs or do good.
I do also think there are significant downsides, on the other hand, to having a culture that disregards common-sense feelings of discomfort like the ones Chi’s comment expressed.
[[EDIT: As a clarification, I’m not classifying the particular bet in this thread as “ghoulish.” I share the general sort of discomfort that Chi’s comment describes, while also recognizing that the bet was well-motivated and potentially helpful. I’m more generally pushing back against the thought that evident motives don’t matter much or that concerns about discomfort/disrespectfulness should never lead people to refrain from public bets.]]
Responding to this point separately: I am very confused by this statement. A large fraction of the topics we discuss within the EA community are pretty directly about the death of thousands, often millions or billions, of other people. From biorisk (as discussed here), to global health and development, to the risk of major international conflict, a lot of topics we think about involve people forming models that quite directly require forecasting the potential impacts of various life-or-death decisions.
I expect bets about a large number of Global Catastrophic Risks to be of great importance, and to similarly be perceived as “ghoulish” as you describe here. Maybe you are describing a distinction that is more complicated than I am currently comprehending, but I at least would expect Chi and Greg to object to bets of the type “what is the expected number of people dying in self-driving car accidents over the next decade?”, “Will there be an accident involving an AGI project that would classify as a ‘near-miss’, killing at least 10000 people or causing at least 10 billion dollars in economic damages within the next 50 years?” and “what is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?”.
All of these just strike me as straightforwardly important questions, that an onlooker could easily construe as “ghoulish”, and I expect would be strongly discouraged by the norms that I see being advocated for here. In the case of the last one, it is probably the key fact I would be trying to estimate when evaluating a new bednet distribution method.
Ultimately, I care a lot about modeling the risks of various technologies, and understanding which technologies and interventions can more effectively save people's lives; whenever I try to understand that, I will have to discuss and build models of how those will impact other people's lives, often in drastic ways.
Compared to the above, the bet between Sean and Justin does not strike me as particularly ghoulish (and I expect that to be confirmed by doing some public surveys on people’s naive perception, as Greg suggested), and so I see little alternative to thinking that you are also advocating for banning bets on any of the above propositions, which leaves me confused why you think doing so would not inhibit our ability to do good.
There might also be a confusion about what the purpose and impact of bets in our community is. While the number of bets being made is relatively small, the effect of having a broader betting culture is quite major, at least in my experience of interacting with the community.
More precisely, we have a pretty concrete norm that if someone makes a prediction or a public forecast, then it is usually valid (with some exceptions) to offer a bet with equal or better odds than the forecasted probability to the person making the forecast, and expect them to take you up on the bet. If the person does not take you up on the bet, this usually comes with some loss of status and reputation, and is usually (correctly, I would argue) interpreted as evidence that the forecast was not meant sincerely, or the person is trying to avoid public accountability in some other way. From what I can tell, this is exactly what happened here.
The effects of this norm (at least as I have perceived it) are large and strongly positive. From what I can tell, it is one of the norms that ensures the consistency of the models that our public intellectuals express, and when I interact with communities that do not have this norm, I very concretely experience many people no longer using probabilities in consistent ways, and can concretely observe large numbers of negative consequences arising from the lack of this norm.
Alex Tabarrok has written about this in his post “A Bet is a Tax on Bullshit”.
This doesn’t affect your point, but I just wanted to note that the post—including the wonderful title—was written by Alex Tabarrok.
Oops. Fixed.
I think what’s confusing you is that people are selectively against betting based on its motivation.
In EA, people regularly talk about morbid topics, but the stated aim is to help people. In this case, the aim could be read as “having fun and making money”. It was the motivation that was a problem, not the act itself, for most people.
While my read of your post is "there is the possibility that the aim could be interpreted this way", which I regard as fair, I feel I should state explicitly, as I have not yet done so, that 'fun and money' was not my aim (and, I strongly expect, not Justin's).
I think it’s important to be as well-calibrated as reasonably possible on events of global significance. In particular, I’ve been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCOV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated/examined claims can result in substantial material harms to people, in terms of stoking up unnecessary public panic, confusing accurate assessment of the situation, and creating ‘boy who cried wolf’ effects for future events. I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCOV.
(edit: I do not mean this to refer to Justin’s fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; more a broad comment on concerns re: poor calibration and the practical value of being well-calibrated).
As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration. So I drew on it in this case. The payoff for me is small (£50; and I’m planning to give it to AMF); the payoff for Justin is higher but he accepted it as an offer rather than proposing it and so I doubt money is a factor for him either.
In the general sense I think both the concern about motivation and how something appears to parts of the community are valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics (the benefits to people I articulate above, and the broader benefits Habryka and others have articulated). I would suggest that achieving this balance may be a matter of clearly stating aims and motivations, and (as others have suggested) taking particular care with tone and framing, but I would welcome further guidance.
Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I’m grateful both for the time taken and the constructive nature of the discussion.
Following Sean here I’ll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCOV seriously and to think thoroughly about it (for the present case and for modelling possible future pandemics); from an inside-view model perspective, the numbers I was getting are quite worrisome. I felt that if I didn't take him up on the bet, people wouldn't take the issue as seriously, nor take explicitly modeling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem solving around things like nCOV.
Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (up to some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion rather than vague intuitions mediated by a bet, but exchanging intuitions is useful too. I also generally would rather make bets about things that are less grim, and wouldn't have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. In grim bets, though, we should definitely pay attention to how something might appear to parts of the community, and make clearer what the intent and motivation behind the bet is.
Third, I wished to bring more attention and support to the issue in the hope that it causes people to take sensible personal precautions and that perhaps some of them can influence how things progress. I do not entirely know who reads this and some of them may have influence, expertise, or cleverness they can contribute.
I’m so sorry Sean, I took it as obvious that your motivation was developing accurate beliefs, hopefully to help you help others, rather than fun and profit. Didn’t mean to imply otherwise!
Thanks Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in the specific case or more generally); but this led me to the conclusion that actually stating my motivations rather than assuming everyone reading knows would be helpful at this stage!
I would be interested to learn more about your views on the current outbreak. Can you link to the statements you made on social media, or present your perspective here (or as a top-level comment or post)?
Hi Wei,
Sorry I missed this. My strongest responses over the last while have fallen into a few categories: (1) responding to people claiming existential (or approaching-existential) risk potential, or sharing papers by people like Taleb stating we are entering a phase where this is near-certain, e.g. https://static1.squarespace.com/static/5b68a4e4a2772c2a206180a1/t/5e2efaa2ff2cf27efbe8fc91/1580137123173/Systemic_Risk_of_Pandemic_via_Novel_Path.pdf
(shared in one xrisk group, for example, as "X-riskers, it would appear your time is now: 'With increasing transportation we are close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens.'" My response: "We are **not** 'close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens'.")
(2) Responding to speculation that nCov is a deliberately developed bioweapon, or was accidentally released from a BSL-4 lab in Wuhan. There isn't evidence for either of these, I think they are unhelpful types of speculation to make without evidence, and such speculations can spread widely. Further, some people making the latter speculation didn't seem to be aware of what a common class of virus coronaviruses are (ranging from the common cold through to SARS). Whether or not a coronavirus was being studied at the Wuhan lab, I think it would not be a major coincidence to find a lab studying a coronavirus in a major city.
(3) Clarifying that the Event 201 exercise Johns Hopkins ran (which involved 65 million hypothetical deaths) was a tabletop simulation, not a prediction, and therefore could not be used to extrapolate an expectation of 65 million deaths from the current outbreak.
I made various other comments as part of discussions, but those were more providing context or points for discussion, as I recall, as opposed to disagreeing per se, and I don't have time to dig them up.
The latter examples don’t relate to predictions of the severity of the outbreak, more so to what I perceived at the time to be misunderstandings, misinformation, and unhelpful/ungrounded speculations.
To clarify a bit, I’m not in general against people betting on morally serious issues. I think it’s possible that this particular bet is also well-justified, since there’s a chance some people reading the post and thread might actually be trying to make decisions about how to devote time/resources to the issue. Making the bet might also cause other people to feel more “on their toes” in the future, when making potentially ungrounded public predictions, if they now feel like there’s a greater chance someone might challenge them. So there are potential upsides, which could outweigh the downsides raised.
At the same time, though, I do find certain kinds of bets discomforting and expect a pretty large portion of people (esp. people without much EA exposure) to feel discomforted too. I think that the cases where I’m most likely to feel uncomfortable would be ones where:
The bet is about an ongoing, pretty concrete tragedy with non-hypothetical victims. One person “profits” if the victims become more numerous and suffer more.
The people making the bet aren’t, even pretty indirectly, in a position to influence the management of the tragedy or the dedication of resources to it. It doesn’t actually matter all that much, in other words, if one of them is over- or under-confident about some aspect of the tragedy.
The bet is made in an otherwise “casual”/”social” setting.
(Importantly) It feels like the people are pretty much just betting to have fun, embarrass the other person, or make money.
I realize these aren't very principled criteria. It'd be a bit weird if the true theory of morality made a principled distinction between bets about "hypothetical" and "non-hypothetical" victims. Nevertheless, I do still have a pretty strong sense of moral queasiness about bets of this sort. To use an implausibly extreme case again, I'd feel like something was really going wrong if people were fruitlessly betting about stuff like "Will troubled person X kill themselves this year?"
I also think that the vast majority of public bets that people have made online are totally fine. So maybe my comments here don’t actually matter very much. I mainly just want to make the point that: (a) Feelings of common-sense moral discomfort shouldn’t be totally ignored or dismissed and (b) it’s at least sometimes the right call to refrain from public betting in light of these feelings.
At a more general level, I really do think it’s important for the community in terms of health, reputation, inclusiveness, etc., if common-sense feelings of moral and personal comfort are taken seriously. I’m definitely happy that the community has a norm of it typically being OK to publicly challenge others to bets. But I also want to make sure we have a strong norm against discouraging people from raising their own feelings of discomfort.
(I apologize if it turns out I’m disagreeing with an implicit straw-man here.)
Do you think the bet would be less objectionable if Justin was able to increase the number of deaths?
No, I think that would be far worse.
But if two people were (for example) betting on a prediction platform that's been set up by public health officials to inform prioritization decisions, then this would make the bet better. The reason is that, in this context, it would obviously matter if their expressed credences are well-calibrated and honestly meant. To the extent that the act of making the bet helps temporarily put some observers "on their toes" when publicly expressing credences, the most likely people to be put "on their toes" (other users of the platform) are also people whose expressed credences have an impact. So there would be an especially solid pro-social case for making the bet.
I suppose this bullet point is mostly just trying to get at the idea that a bet is better if it can clearly be helpful. (I should have said “positively influence” instead of just “influence.”) If a bet creates actionable incentives to kill people, on the other hand, that’s not a good thing.
Thanks bmg. FWIW, I provide my justification (from my personal perspective) here: https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-wuhan-coronavirus-outbreak?commentId=mWi2L4S4sRZiSehJq
Thanks! I do want to stress that I really respect your motives in this case and your evident thoughtfulness and empathy in response to the discussion; I also think this particular bet might be overall beneficial. I also agree with your suggestion that explicitly stating intent and being especially careful with tone/framing can probably do a lot of work.
It’s maybe a bit unfortunate that I’m making this comment in a thread that began with your bet, then, since my comment isn’t really about your bet. I realize it’s probably pretty unpleasant to have an extended ethics debate somehow spring up around one of your posts.
I mainly just wanted to say that it’s OK for people to raise feelings of personal/moral discomfort and that these feelings of discomfort can at least sometimes be important enough to justify refraining from a public bet. It seemed to me like some of the reaction to Chi’s comment went too far in the opposite direction. Maybe wrongly/unfairly, it seemed to me that there was some suggestion that this sort of discomfort should basically just be ignored or that people should feel discouraged from expressing their discomfort on the EA Forum.
The US government attempted to create a prediction market to predict terrorist attacks. It was shut down basically because it was perceived as “ghoulish”.
My impression is that experts think that shutting down the market made terrorism more likely, but I’m not super well-informed.
I see this as evidence both that 1) markets are useful and 2) some people (including influential people like senators) react pretty negatively to betting on life or death issues, despite the utility.
Just as an additional note, to speak directly to the examples you gave: I would personally feel very little discomfort if two people (esp. people actively making or influencing decisions about donations and funding) wanted to publicly bet on the question: “What is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?” I obviously don’t know, but I would guess that Chi and Greg would both feel more comfortable about that question as well. I think that some random “passerby” might still feel some amount of discomfort, but probably substantially less.
I realize that there probably aren’t very principled reasons to view one bet here as intrinsically more objectionable than others. I listed some factors that seem to contribute to my judgments in my other comment, but they’re obviously a bit of a hodgepodge. My fully reflective moral view is also that there probably isn’t anything intrinsically wrong with any category of bets. For better or worse, though, I think that certain bets will predictably be discomforting and wrong-feeling to many people (including me). Then I think this discomfort is worth weighing against the plausible social benefits of the individual bet being made. At least on rare occasions, the trade-off probably won’t be worth it.
I ultimately don’t think my view here is that different than common views on lots of other more mundane social norms. For example: I don’t think there’s anything intrinsically morally wrong about speaking ill of the dead. I recognize that a blanket prohibition on speaking ill of the dead would be a totally ridiculous and socially/epistemically harmful form of censorship. But it’s still true that, in some hard-to-summarize class of cases, criticizing someone who’s died is going to strike a lot of people as especially uncomfortable and wrong. Even without any specific speech “ban” in place, I think that it’s worth giving weight to these feelings when you decide what to say.
What this general line of thought implies about particular bets is obviously pretty unclear. Maybe the value of publicly betting is consistently high enough to, in pretty much all cases, render feelings of discomfort irrelevant. Or maybe, if the community tries to have any norms around public betting, then the expected cost of wise bets avoided due to "false positives" would just be much higher than the expected cost of unwise bets made due to "false negatives." I don't believe this, but I obviously don't know. My best guess is that it probably makes sense to strike a (messy/unprincipled/disputed) balance that's not too dissimilar from the balances we strike in other social and professional contexts.
(As an off-hand note, for whatever it’s worth, I’ve also updated in the direction of thinking that the particular bet that triggered this thread was worthwhile. I also, of course, feel a bit weird having somehow now written so much about the fine nuances of betting norms in a thread about a deadly virus.)
I do think the “purely” matters a good bit here. While I would go as far as to argue that even purely financial motivations are fine (and should be leveraged for the public good when possible), I think in as much as I understand your perspective, it becomes a lot less bad if people are only partially motivated by making money (or gaining status within their community).
As a concrete example, I think large fractions of academia are motivated by wanting a sense of legacy and prestige (this includes large fractions of epidemiology, which is highly relevant to this situation). Those motivations also feel not fully great to me, and I would feel worried about an academic system that tries to purely operate on those motivations. However, I would similarly expect an academic system that does not recognize those motivations at all, bans all expressions of those sentiments, and does not build system that leverages them, to also fail quite disastrously.
I think in order to produce large-scale coordination, it is important to enable the leveraging of a large variety of motivations, while also keeping them in check by ensuring at least a minimum level of more aligned motivations (or some other external system that ensures partially aligned motivations still result in good outcomes).
I strongly disagree with this comment—I think that motivations matter and that betting with an appropriate respect for the people who have died is completely possible—but I am glad you stated your position explicitly. Comments like this make the Forum better.
I would similarly be curious to understand the level of downvoting of my comment offering to remove my comments in light of concerns raised and encouragement to consider doing so. This is by far the most downvoted comment I’ve ever had. This may just be an artefact of how my call for objections to removing my comments has manifested (I was anticipating posts stating an objection like Ben’s and Habryka’s, and for those to be upvoted if popular, but people may have simply expressed objection by downvoting the original offer). In that case that’s fine.
Another possible explanation is an objection to me even making the offer in the first place. My steelman for this is that even the offer of self-censorship of certain practices in certain situations could be seen as coming at a very heavy cost to group epistemics. However, from an individual-posting-to-forum perspective, this feels like an uncomfortable thing to be punished for. Posting possibly-controversial posts to a public forum has some unilateralist's-curse elements to it: risk is distributed to the overall forum, and the person who posts the possibly-controversial thing is likely to be someone who deems the risk lower than others do. And we are not always the best at impartially judging our own actions. So when arguments are made in good faith that an action may result in group harm, it seems like a reasonable step to offer to withdraw the action, and to signal a willingness to cooperate with whatever the group (or moderators, I guess) deemed to be in the group's interest. And I built in a time delay to allow for objections and more views to be raised before taking action. I would anticipate a more negative response if I were calling for deletion of others' comments, but this was my own comment.
I would also note that offering to delete one’s comments comes at a personal cost, as does acknowledging possible fault of judgement; having an avalanche of negative karma on top of it adds to the discomfort.
If there’s something else going on—e.g. a sense that I was being dishonest about following through on the offer to delete; or something else—it would be good to know. I guess there could be a negative reaction to expressing the view that Chi’s perspective is valid. In my view, a point can be valid without being action-deciding. Here there are multiple considerations which I would all see as valid (value of betting to calibrate beliefs; value of doing so in public to reinforce a norm the group sees as beneficial and promote that norm to others; value of avoiding making insensitive-seeming posts that could possibly cause reputational damage to the group). The question is one of weighting of considerations—I have my own views, but it was very helpful to get a broader set of views in order to calibrate my actions.
Ah, I definitely interpreted your comment as “leave a reply or downvote if you think that’s a bad idea”. So I downvoted it and left a reply. My guess is many others have done the same for similar reasons.
I do also think editing for tone was a bad idea (mostly because I think the norm of having to be careful around tone is a pretty straightforward tax on betting, and because it contributed to the shaming of people who do want to bet for what Chi expressed as "inappropriate" motivations), so doing that was a concrete thing that I think was bad on a norm level.
Thanks, good to know on both, appreciate the feedback.
(+1 to Oli’s reasoning—I have since removed my downvote on that comment.)
I’m happy to remove my comments; I think Chi raises a valid point. The aim was basically calibration. I think this is quite common in EA and forecasting, but agree it could look morbid from the outside, and these are publicly searchable. (I’ve also been upbeat in my tone for friendliness/politeness towards people with different views, but this could be misread as a lack of respect for the gravity of the situation). Unless this post receives strong objections by this evening, I will delete my comments or ask moderators to delete.
I also strongly object. I think public betting is one of the most valuable aspects of our culture, and would be deeply saddened to see these comments disappear (and more broadly as an outside observer, seeing them disappear would make me deeply concerned about the epistemic health of our community, since that norm is one of the things that actually keeps members of our community accountable for their professed beliefs)
My take is that at this stage this has been resolved in favour of "editing for tone but keeping the bet posts". I have done the editing for tone. I am happy with this outcome, and I hope most others are too.
My own personal view is that public betting on beliefs is good; it's why I did it (both this time and in the past), and my preference is to continue doing so. However, my take is that the discussion highlighted that in certain circumstances around betting (such as predictions on an ongoing mass-fatality event) it is worth being particularly careful about tone.
I strongly object to saying we’re not allowed to bet on the most important questions—questions of life or death. That’s like deciding to take the best person off the team defending the president. Don’t handicap yourself when it matters most. This is the tool that stops us from just talking hot air and actually records which people are actually able to make correct predictions. These are some of the most important bets on the forum.
(Kind of just a nitpick)
I think I strongly agree with you on the value of being open to using betting in cases like these (at least in private, probably in public). And if you mean something like “Just in case anyone were to interpret Chi a certain way, I’d like to say that I strongly object to...”, then I just fully agree with your comment.
But I think it’s worth pointing out that no one said “we’re not allowed to” do these bets—Chi’s comment was just their personal view and recommendation, and had various hedges. At most it was saying “we shouldn’t”, which feels quite different from “we’re not allowed to”.
(Compare thinking that what someone is saying is racist and they really shouldn’t have said it, vs actually taking away their platforms or preventing their speech—a much higher bar is needed for the latter.)
Personally, I don't see the bet itself as something that shouldn't have happened. I acknowledge that others could have the perspective Chi had, and can see why they would, but I didn't feel that way myself, and I personally think that downside is outweighed by the upside of the bet being good for the community's epistemics; and not just for Justin and Sean, but also for people reading the comments, who can come to more informed views based on the views the bettors take and how strongly they hold them. (Therefore, there's value in it being public, I think; I also would therefore personally suggest the comments shouldn't be deleted, but it's up to Sean.)
But I did feel really weird reading “Pleasure doing business Justin!”. I didn’t really feel uncomfortable with the rest of the upbeat tone Sean notes, but perhaps that should’ve been toned down too. That tone isn’t necessary for the benefits of the bet—it could be civil and polite but also neutral or sombre—and could create reputational issues for EA. (Plus it’s probably just good to have more respectful/taking-things-seriously norms in cases like these, without having to always calculate the consequences of such norms.)
Also, I feel uncomfortable with someone having downvoted Chi’s comment, given that it seemed to have a quite reasonable tone and to be sharing input/a suggestion/a recommendation. It wasn’t cutting or personal or damning. It seemed to me more like explaining Chi’s view than persuading, so I think we should be somewhat wary of downvoting such things, even when we disagree, so we don’t fall into something like groupthink. (I’ve strong upvoted for reasons of balance, even though I feel unsure about Chi’s actual recommendations.)
I agree that Chi’s comment is very reasonable (and upvoted for that reason). Personally, I think editing for tone would be a reasonable compromise, but I am glad people are starting to think more about the EA Forum as a publicly searchable space.
Re: Michael & Khorton's points: (1) Michael, fully agreed; it was a casual figure of speech, and I've now deleted it. I apologise. (2) I've done some further editing for tone but would be grateful if others had further suggestions.
I also agree re: Chi’s comment—I’ve already remarked that I think the point was valid, but I would add that I found it to be respectful and considerate in how it made its point (as one of the people it was directed towards).
It’s been useful for me to reflect on. I think a combination of two things for me: one is some inherent personal discomfort/concern about causing offence by effectively saying “I think you’re wrong and I’m willing to bet you’re wrong”, which I think I unintentionally counteracted with (possibly excessive) levity. The second is how quickly the disconnect can happen from (initial discussion of very serious topic) to (checking in on forum several days later to quickly respond to some math). Both are things I will be more careful about going forward. Lastly, I may have been spending too much time around risk folk, for whom certain discussions become so standard that one forgets how they can come across.
Fwiw, the “pleasure doing business” line was the only part of your tone that struck me as off when I read the thread.
I guess there’s an interesting argument here for making casual gambling illegal—based on this thread, it seems like “Bets are serious & somber business, not for frivolous things like horse races” could be a really high value meme to spread.
Metaculus currently gives a 16% chance to the claim that total deaths before 2021 will be greater than 11.6 M.
I must admit, I would not make the same bet at the same odds on the 27th of February 2020.
If the death rate is really that high, then we should significantly update P(it goes world-scale pandemic) and P(a particular person gets it | it goes world-scale pandemic) downwards, as such a death rate would cause governments and individuals to put a lot of resources towards prevention.
One can also imagine that P(a particular person dies from it | a particular person gets it) will go down with time as resources are spent on finding better treatment and a cure.
Good points! I agree, but I'm not sure how significant those effects will be… Do you have an idea of how we'd update on those effects in a principled, precise way?
It’s difficult. You’d probably need a model of every country since state capacity, health care, information access… can vary widely.
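At a cruder level, one can at least make the decomposition explicit and see how sensitive the headline risk is to each factor. A minimal sketch in Python, with purely illustrative placeholder numbers (none of these are anyone's actual estimates):

```python
# Minimal sketch of the Fermi decomposition discussed in this thread.
# All numbers are illustrative placeholders, not anyone's actual estimates.
p_pandemic = 0.10   # P(it goes world-scale pandemic)
p_infected = 0.30   # P(a particular person gets it | pandemic)
p_dies     = 0.02   # P(a particular person dies | infected)

print(f"P(death) = {p_pandemic * p_infected * p_dies:.4%}")

# Crude way to fold in the effects discussed above: scale the conditional
# factors down as prevention resources ramp up, and see how the headline
# number moves.
for prevention_factor in (1.0, 0.5, 0.25):   # fraction of baseline risk
    p = (p_pandemic * prevention_factor) * (p_infected * prevention_factor) * p_dies
    print(f"prevention scaling {prevention_factor}: P(death) = {p:.4%}")
```

Per-country heterogeneity would then amount to running something like this with country-specific factors and aggregating, which is where it gets hard.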
It looks like it is hardly affecting children; a person of older age should give themselves a higher estimate of being affected.
What do you base this one on?
~1/6 of the world population were infected by the 2009 swine flu (mortality rate was much lower though, at ~1/3000 of those infected).
I base it on what Greg mentions in his reply about the swine flu, and also on the reasoning that the reproduction number has to go below 1 for it to stop spreading. If its normal reproduction number before people have become immune (after being sick) is X (say 2), then to get the reproduction number below 1 we need (susceptible population proportion) × (normal reproduction number) < 1. So with a reproduction number of 2, the proportion who get infected will be 1/2.
This assumes that people have time to become immune, so for a fast-spreading virus more than that proportion would fall ill (note, though, that pointing in the opposite direction is the effect that not everyone is uniformly likely to get ill, because some people are in relative isolation or have very good hygiene).
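To make that arithmetic concrete, here's a minimal sketch under the textbook homogeneous-mixing SIR assumptions (the R0 values are ones quoted elsewhere in this thread); the final-size equation captures the overshoot just mentioned:

```python
import math

def herd_immunity_threshold(r0: float) -> float:
    """Fraction immune at which R_eff drops below 1 and spread stops."""
    return 1.0 - 1.0 / r0

def final_size(r0: float, tol: float = 1e-10) -> float:
    """Solve the classic SIR final-size equation z = 1 - exp(-r0 * z)
    by fixed-point iteration; z is the fraction ultimately infected,
    which overshoots the herd-immunity threshold."""
    z = 0.5
    for _ in range(10_000):
        z_next = 1.0 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            break
        z = z_next
    return z

for r0 in (1.5, 2.0, 2.6, 3.5):   # values quoted elsewhere in the thread
    print(f"R0={r0}: threshold={herd_immunity_threshold(r0):.2f}, "
          f"final size={final_size(r0):.2f}")
```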
Just a note that the reproduction number can decrease for other reasons; in particular if and as the disease spreads you might expect greater public awareness, CDC guidance, travel bans, etc leading to greater precaution and less opportunity for infected individuals to infect others.
How do you arrive at 1/3 here?
It’s based on a few facts and swirling them around in my intuition to choose a single simple number.
A long, invisible, contagious incubation period (this seems somewhat indicated, but may be wrong) and a high degree of contagiousness (the R0 factor) imply it is hard to contain and should spread through the network (looking something like probability spreading in a Markov chain, with transition probabilities roughly following transportation probabilities).
The exponential growth implies that we are only a few doublings away from a world-scale pandemic (also note we're probably better at stopping things when they're at small scale). In the exponential sense, 4,000 is halfway between 1 and 8 million, and about a third of the way to world population.
The exponential growth curve and incubation period also have implications about “bugging out” strategies where you get food and water, isolate, and wait for it to be over. Let’s estimate again:
Assuming, as in the above comment, that we are 1/3 of the way up the exponential climb (in reported numbers) towards the total world population, and that it took a month, then in two more months (the end of March) we would expect it to reach saturation. If the infectious incubation period is 2 weeks (and people are essentially uniformly infectious during that time) then you'd move the two-month date forward by two weeks (the middle of March). Assuming you don't want to take many risks here, you might add a week's buffer in front (the end of the first week of March). Finally, after symptoms arise, people may be infectious for a couple of weeks (I believe this is correct; anyone have better data?). So the sum total amount of time for the isolation strategy is about 5 weeks (and it may start as early as the end of the first week of March, or earlier depending on transportation and supply disruptions). A rough version of this timeline arithmetic is sketched below.
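```python
import math
from datetime import date, timedelta

# Toy version of the timeline arithmetic above; every input is an
# illustrative assumption, not data.
cases_now = 4_000            # reported cases, late January
world_pop = 8_000_000_000
start     = date(2020, 1, 27)

doublings_left = math.log2(world_pop / cases_now)   # ~21 doublings
for doubling_days in (3.5, 7.0):                    # assumed doubling times
    eta = start + timedelta(days=doubling_days * doublings_left)
    print(f"doubling every {doubling_days} days -> saturation around {eta}")
```

(With a ~3.5-day doubling time this lands around early April, close to the end-of-March figure above; with a 7-day doubling time it lands around June.)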
Governments by detecting cases early or restricting travel, and citizens by isolating and using better hygiene, could change these numbers and dates.
(note: for future biorisks that may be more severe this reasoning is also useful)
Have you looked at how long pandemics have lasted in the past? I think it’s a lot longer than five weeks.
It could have a longer tail, but given the high R0, a large part of the human population could be simultaneously ill (or self-isolated) in March-April 2020.
What is your opinion, Dave: could this put food production at risk?
Thanks. I've updated towards your estimate, but 1/3 still seems high by my (all too human) intuitions.
I did some research on hand hygiene and wrote a quick summary on Facebook and LessWrong if anyone is interested. Not sure it’s really appropriate for a top-level post on the EA Forum but I do think it’s pretty useful to know. Most people (including me a few days ago) are very bad at washing their hands.
For a week or so I have been fearing this potentially deadly disease spreading to most people on Earth (space stations and Antarctic bases excepted), since the doubling time has been about half a week, and simple calculations show that even with a 1-week doubling time, half the Earth's population would get it by June. My fears were confirmed by reading of the Johns Hopkins Event 201 simulation last year, in which a virus with a 1-week doubling time spread throughout the world and killed tens of millions of people:
https://www.abc.net.au/news/2020-02-01/coronavirus-outbreak-researchers-simulated-severe-pandemic/11906562
http://www.centerforhealthsecurity.org/event201/videos.html
There is no precedent for a virus spreading as fast and far as this one:
https://graphics.reuters.com/CHINA-HEALTH-VIRUS-COMPARISON/0100B5BY3CY/
I still believe the Wuhan coronavirus will infect half the population by the middle of this year, but I now have cause for hope that the variants which do this will be far less virulent than the one or ones which caused the deaths in Hubei in January.
I am not an epidemiologist, but here goes. I am suggesting that the virus is mutating rapidly into less virulent strains which compete successfully against the original, December and early January, more virulent form(s). However, I know of no reports of such mutations.
From the Johns Hopkins ticker (necessarily long URL):
https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6
here are the figures from Hubei, the province containing Wuhan, and the ten provinces with the highest death rates, all of which were infected in mid-to-late January, about 5 or 6 weeks after the disease began in Wuhan. (The following is a table, to be viewed in a fixed-width font.)
The lower death rate in the provinces which were first infected well after the initial spread in Hubei is striking: it is 1/27th the death rate in Hubei.
The reasons for this might include:
1. Cases in the other provinces are, on average, more recent than those in Hubei, meaning the death rate will rise over time to resemble that of Hubei (and the Hubei death rate could rise too, for the same reason.)
2. Saturation of hospitals and testing in Hubei and Wuhan in particular.
3. The provinces being infected more with a less symptomatic and so less deadly variant of the virus than those which caused the initial spread in Hubei, though hopefully the same process would be occurring in Hubei too, so the death rate for recent infections would be lower too. (I hope this is happening.)
4. Nursing leading to people surviving to the extent that a significantly lower proportion of people have no immunity. (I think this does not yet play a significant role.)
5. Poorer quality of care in Hubei compared to the other provinces, including people not being able to get into hospital.
However, I can’t imagine that 1, 2 or 4 would make anything like the difference we see—a striking 27:1 ratio in the death rate. 5 might explain some of it. This makes me think that 3 is true to a significant degree.
Now turning to the recovery rates. The other provinces have a significantly higher recovery rate than Hubei. Assuming the diagnostic standards do not vary significantly, this cannot be explained by 1, 2 or 4. It would be very well explained by 3.
In the other provinces, the recognised infections seem to be less damaging, with quicker recovery. Assuming that quality of care is about the same, the only explanation I can think of is that these more recent cases in the other provinces are with variants of the virus which cause fewer symptoms and perhaps lead to an earlier recovery, while still being contagious enough to compete successfully against the original and/or any more recently mutated, more virulent strains.
This analysis gives me hope that by the time the virus reaches about half the people on Earth—as I believe it will by the middle of this year—that the variants most people get will be much less damaging than at the start of the pandemic.
If this analysis is true, then the true rates of infection in the other provinces—and recently in Hubei—may be much higher in proportion to the number of confirmed cases than was the ratio a few weeks ago. This would be due to a greater proportion of infected people having no or only mild symptoms—so they are never tested or recognised much by the medical system.
If this is occurring, then it may work out well… unless there are mutants which remain highly infectious but which have sufficiently different spike and envelope proteins that the antibodies developed in response to the current strains are ineffective. Then those strains might start a whole second wave of infection, since immunologically, they would be a different virus.
If this analysis is correct, then while the Chinese lockdown (which is unsustainable) may slow the spread of the virus more effectively than the post-symptom quarantine arrangements of the West (which is the best they can do, not being a dictatorship), the Western approach is actually more helpful. This would be due to the Western approach enabling strains of the virus which have few or no symptoms to spread very rapidly, while significantly reducing the replication of strains which produce strong enough symptoms for people to be hospitalised and tested.
If there were no such helpful mutations, then the Western approach would be less effective than the Chinese approach, but the Chinese approach is unsustainable for more than a few weeks. There’s no hope of a conventional vaccine by the middle of this year, or even the end. However, if my analysis is correct, then by the good fortune of the virus mutating without altering its envelope and spike proteins significantly, the most successful strains become less damaging and work like a vaccine against the more damaging strains.
Some slightly positive evidence: By the 24th, 19 cases had been reported outside of China, with onset of symptoms usually before that. Given the most recent estimate of a mean incubation period of 5 days, it seems surprising that only 1 of the 19 cases has infected another person that we know of so far (a man traveling from Wuhan to Vietnam infected his son, who shared a hotel room with his father for 3 days). Since monitoring of people the infected came into contact with is high, finding infected people should be fairly quick.
Seems that effective containment, a lower R0 than expected (both good), or a longer incubation period than previously assumed (bad) could be the reason.
Source for the 19 cases by the 24th: https://docs.google.com/spreadsheets/d/1yZv9w9zRKwrGTaR-YzmAqMefw4wMlaXocejdxZaTs6w/htmlview?usp=sharing&sle=true#
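As a toy way to quantify how surprising this observation is: assume each exported case independently generates a Poisson number of secondary cases within the observation window. R_WINDOW below is an assumed per-case mean for that window, not an estimate from this thread:

```python
import math

# Toy calculation: if each of the 19 exported cases independently caused
# a Poisson number of secondary infections within the observation window,
# how surprising is seeing only 1?
N_CASES, OBSERVED = 19, 1

def p_at_most(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

for r_window in (0.1, 0.5, 1.0, 2.6):   # assumed mean onward infections per case
    print(f"R_window={r_window}: P(<= {OBSERVED} secondary cases) "
          f"= {p_at_most(OBSERVED, N_CASES * r_window):.4f}")
```

Under these assumptions, anything like the estimated R0 of 2.6 playing out within the window would make the observation extremely unlikely, which is consistent with the containment/lower-R0/longer-incubation explanations above.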
The possibility of a long incubation period (and especially a long-ish pre-symptomatic infectiousness period) is especially worrying to me, as my impression is that this was a key reason SARS didn’t take off more than it did.
That said, I’m not sure it’s clear yet that there is a long pre-symptomatic period. This article suggests we’re not really sure about this yet. I’m expecting to get more information very soon, though.
Update: “A WHO panel of 16 independent experts twice last week declined to declare an international emergency over the outbreak.
“While more cases have been emerging outside China in people who have travelled from there recently, the WHO said only one of the overseas cases involved human-to-human transmission.”
https://www.theguardian.com/science/live/2020/jan/28/coronavirus-first-death-in-beijing-as-us-issues-new-china-travel-warning-live-updates?page=with:block-5e2ff1a58f0811db2faec898#block-5e2ff1a58f0811db2faec898
The WHO has now declared a global health emergency.
https://www.bbc.com/news/world-51318246
There has now been an instance of person-to-person transmission outside China.
How confident are you that it affects mainly older people or those with preexisting health conditions? Are the stats solid now? I vaguely recall that SARS and MERS (possibly the relevant reference class) were age-agnostic.
Here's a chart of odds of death by age that was tweeted by an epidemiology professor at Hopkins. I can't otherwise vouch for the reliability of the data, and caveat that mortality data sucks this early in an epidemic. https://twitter.com/JustinLessler/status/1222108497556279297
Nice find! Hopefully it updates soon as we learn more. What is your interpretation of it in terms of mortality rate in each age bracket?
MERS was pretty age-agnostic. SARS had much higher mortality rates in >60s. All the current reports from China claim that it affects mainly older people or those with preexisting health conditions. Coronavirus is a broad class including everything from the common cold to MERS; not sure there’s good ground to anchor too closely to SARS or MERS as a reference class.
When comparing the novel coronavirus to the seasonal flu, it seems like the main differences are:
-the seasonal flu is typically around half as infectious (R0 typically 1.4 to 1.6)
-some strains of seasonal flu have a vaccine available; the coronavirus doesn’t yet (although quick progress has apparently been made on the first steps)
But we believe that both seasonal flu and this coronavirus are similarly deadly, largely for the same segment of the population (older/at risk). Have I summarised this correctly?
I don’t think this is a good summary for an important reason: I think the Wuhan Coronavirus is a few orders of magnitude more deadly than a normal seasonal flu. The mortality estimates for the Wuhan Coronavirus are in the single digit percentages, whereas this source tells me that the seasonal flu mortality rate is about 0.014%. [ETA: Sorry, it’s closer to 0.1%, see Greg Colbourn’s comment].
A better comparison would be to look at death rate for those infected: ~0.1% for seasonal flu.
The mortality rate is the proportion of infections that *ultimately* result in death. If we had really good data (we don't), we could get a better estimate by pitting fatalities against *recoveries*. Since we aren't tracking recoveries well, if we attempt to compute mortality rates right now (as infections are increasing exponentially), we're going to badly underestimate the actual mortality rate.
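To illustrate the gap between the two crude estimates, here's a sketch with placeholder numbers (not real counts); the naive ratio is biased low mid-outbreak, while the resolved-cases ratio can be biased high if deaths resolve faster than recoveries:

```python
# Sketch of the two crude case-fatality-rate estimates discussed here,
# with placeholder numbers (not real counts).
deaths, recoveries, confirmed = 100, 300, 4_000

naive_cfr = deaths / confirmed                 # deaths over all confirmed cases;
                                               # biased low while most cases
                                               # are still unresolved
resolved_cfr = deaths / (deaths + recoveries)  # deaths over resolved cases only;
                                               # can be biased high if deaths
                                               # resolve faster than recoveries
print(f"naive: {naive_cfr:.1%}, resolved-only: {resolved_cfr:.1%}")
```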
The source you’re looking at considers everyone in the population, even people who don’t get the flu, but the 3% figure for the Wuhan Coronavirus is only considering the people who have been infected.
EDIT: I was wrong, your source is giving the percentage of deaths caused by the flu, not the percentage of the whole population killed by flu each year.
The annual death toll in China is 8.9 million, so 87 deaths would mean ~0.001% are caused by the Wuhan Coronavirus, compared to 0.014% for the seasonal flu. (I don't think this is a great way to compare, because very few people have been exposed to the Coronavirus so far, but you get the gist.)
No, the case fatality rate isn’t actually 3%, that’s the rate based on identified cases, and it’s always higher than the true rate.
The estimate of the swine flu fatality rate was ~0.5% in July 2009, with 100,000 cases reported. It ended up dropping by over an order of magnitude.
The opposite trend occurred for SARS (in the same class as nCoV-2019), which originally had around a 2-5% deaths/cases rate but ended up with >10% once all cases ran their full course.
SARS was very unusual, and serves as a partial counterexample. On the other hand, the “trend” being shown is actually almost entirely a function of the age groups of the people infected—it was far more fatal in the elderly. With that known now, we have a very reasonable understanding of what occurred—which is that because the elderly were infected more often in countries where SARS reached later, and the countries are being aggregated in this graph, the raw estimate behaved very strangely.
I think we were both confused. But based on what Greg Colbourn said, my point still stands, albeit to a weaker extent.
Note that there is now a Metaculus prize for questions and comments related to the coronavirus outbreak. Here you can see the existing questions in this series.
Some good background here: https://www.reddit.com/r/China_Flu/comments/exe552/coronavirus_faq_misconceptions_information_from_a (a)
I'm checking all stats nearly every hour at https://www.coronavirus-symptoms.info
Would love for someone to poke this & assess its epistemics: Coronavirus Contains “HIV Insertions”, Stoking Fears Over Artificially Created Bioweapon (a)
I’m more curious about the trustworthiness of the scary graphs than about the claims that it may have been bioengineered.
Cleared up here on the EA Hangouts fb group.
Any thoughts on why the estimates here are so much higher than on Metaculus? Here they seem to range between 10 and 100 million, whilst the current Metaculus median is 100k.
Maybe I’ve missed something.
Maybe because of an anchoring effect: everyone on Metaculus sees the median prediction before making their own, and doesn't want to be too different from the group.
How are you including age in this regression? It seems to me that ARDS resulting in ICU admission is a candidate for confounding with age.