There might also be some confusion about the purpose and impact of bets in our community. While the number of bets being made is relatively small, the effect of having a broader betting culture is quite large, at least in my experience of interacting with the community.
More precisely, we have a pretty concrete norm that if someone makes a prediction or a public forecast, it is usually valid (with some exceptions) to offer them a bet at odds equal to or better than those implied by the forecasted probability, and to expect them to take you up on it. If the person does not take you up on the bet, this usually comes with some loss of status and reputation, and is usually (correctly, I would argue) interpreted as evidence that the forecast was not meant sincerely, or that the person is trying to avoid public accountability in some other way. From what I can tell, this is exactly what happened here.
The effects of this norm (at least as I have perceived it) are large and strongly positive. From what I can tell, it is one of the norms that ensures the consistency of the models that our public intellectuals express, and when I interact with communities that do not have this norm, I very concretely experience many people no longer using probabilities in consistent ways, and can concretely observe large numbers of negative consequences arising from the lack of this norm.
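To make the arithmetic behind this norm concrete, here is a minimal sketch in Python; the stated probability, stakes, and payouts are hypothetical numbers chosen purely for illustration:

```python
# Hypothetical illustration of the norm above: if a forecaster states
# P(event) = 0.20, a bet at "equal or better odds" has non-negative
# expected value for them, provided the stated probability is sincere.

def forecaster_ev(stated_p: float, stake: float, payout_if_event: float) -> float:
    """Expected value for the forecaster, who bets that the event occurs.

    stake: what the forecaster loses if the event does not occur.
    payout_if_event: what the forecaster wins if it does.
    """
    return stated_p * payout_if_event - (1 - stated_p) * stake

# Fair odds for p = 0.20 are 4:1 against, so risking 10 to win 40 is EV-neutral:
print(forecaster_ev(0.20, stake=10, payout_if_event=40))  # 0.0
# "Better odds" for the forecaster, e.g. risking 10 to win 50, has positive EV:
print(forecaster_ev(0.20, stake=10, payout_if_event=50))  # 2.0
# Declining a positive-EV bet is evidence the stated 20% was not sincere.
```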
Alex Tabarrok has written about this in his post “A Bet is a Tax on Bullshit”.
This doesn’t affect your point, but I just wanted to note that the post—including the wonderful title—was written by Alex Tabarrok.
Oops. Fixed.
I think what’s confusing you is that people are selectively against betting based on its motivation.
In EA, people regularly talk about morbid topics, but the stated aim is to help people. In this case, the aim could be read as “having fun and making money”. For most people, it was the motivation that was a problem, not the act itself.
My read of your post is that “there is the possibility that the aim could be interpreted this way”, which I regard as fair. Still, since I have not yet done so explicitly, I should state that ‘fun and money’ was not my aim, and I strongly expect it was not Justin’s either.
I think it’s important to be as well-calibrated as reasonably possible on events of global significance. In particular, I’ve been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCoV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated or poorly examined claims can result in substantial material harms to people, in terms of stoking unnecessary public panic, confusing accurate assessment of the situation, and creating ‘boy who cried wolf’ effects for future events. I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCoV.
(Edit: I do not mean this to refer to Justin’s Fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; this is more a broad comment on concerns re: poor calibration and the practical value of being well-calibrated.)
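To illustrate what being “well-calibrated” means in practice, here is a minimal sketch of one standard calibration measure, the Brier score (lower is better); the forecasts and outcomes below are invented purely for illustration and are not anyone’s actual nCoV predictions:

```python
# Brier score: mean squared error between stated probabilities and
# the 0/1 outcomes that actually occurred. Lower is better-calibrated.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# An alarmist saying 0.9 for events that mostly don't happen scores poorly...
print(brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 0, 0]))      # ~0.61
# ...while saying 0.25 for the same events (one in four occurred) does well.
print(brier_score([0.25, 0.25, 0.25, 0.25], [1, 0, 0, 0]))  # 0.1875
```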
As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration, so I drew on those tools in this case. The payoff for me is small (£50, which I’m planning to give to AMF); the payoff for Justin is higher, but he accepted the bet as an offer rather than proposing it, so I doubt money is a factor for him either.
In the general sense, I think both the concern about motivation and the concern about how something appears to parts of the community are valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics, both the benefits to people I articulate above and the broader benefits Habryka and others have articulated. I would suggest that achieving this balance may be a matter of clearly stating aims and motivations and (as others have suggested) taking particular care with tone and framing, but I would welcome further guidance.
Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I’m grateful both for the time taken and the constructive nature of the discussion.
Following Sean here, I’ll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (for the present case and for modelling possible future pandemics): from an inside-view model perspective, the numbers I was getting were quite worrisome (a purely hypothetical sketch of this kind of estimate appears below). I felt that if I didn’t take him up on the bet, people wouldn’t take the issue as seriously, nor take explicitly modelling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem solving around things like nCoV.
Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (up to some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion to exchanging vague intuitions mediated by a bet, but exchanging intuitions is still useful. I would also generally rather bet on things that are less grim, and wouldn’t have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. With grim bets, though, we should definitely pay attention to how the bet might appear to parts of the community, and make clearer what the intent and motivation behind it is.
Third, I wished to bring more attention and support to the issue, in the hope that it would cause people to take sensible personal precautions and that perhaps some of them could influence how things progress. I do not entirely know who reads this, and some readers may have influence, expertise, or cleverness they can contribute.
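For readers unfamiliar with the kind of inside-view estimate mentioned above, here is a purely hypothetical Fermi-style sketch; every number is a placeholder chosen for illustration, and none of this is Justin’s actual model or real nCoV data:

```python
# Purely hypothetical Fermi-style estimate; all inputs are placeholders,
# not Justin's actual figures and not real nCoV data.

confirmed_cases = 10_000      # assumed confirmed cases today
underreporting_factor = 5     # assumed true cases per confirmed case
doubling_time_days = 7        # assumed epidemic doubling time
horizon_days = 60             # projection window
case_fatality_rate = 0.02     # assumed CFR

true_cases_now = confirmed_cases * underreporting_factor
# Naive unchecked exponential growth; real epidemics saturate and face
# interventions, which is partly why estimates like this land on the severe end.
projected_cases = true_cases_now * 2 ** (horizon_days / doubling_time_days)
projected_deaths = projected_cases * case_fatality_rate
print(f"~{projected_deaths:,.0f} deaths under these (made-up) assumptions")
```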
I’m so sorry, Sean; I took it as obvious that your motivation was developing accurate beliefs, hopefully to help you help others, rather than fun and profit. I didn’t mean to imply otherwise!
Thanks, Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in the specific case or more generally), but it led me to the conclusion that actually stating my motivations, rather than assuming everyone reading knows them, would be helpful at this stage!
“I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCoV.”

I would be interested to learn more about your views on the current outbreak. Can you link to the statements you made on social media, or present your perspective here (or as a top-level comment or post)?
Hi Wei,
Sorry I missed this. My strongest responses over the last while have fallen into the following categories: (1) responding to people claiming existential or near-existential risk potential, or sharing papers by people like Taleb stating we are entering a phase where this is near-certain (e.g. https://static1.squarespace.com/static/5b68a4e4a2772c2a206180a1/t/5e2efaa2ff2cf27efbe8fc91/1580137123173/Systemic_Risk_of_Pandemic_via_Novel_Path.pdf).
(The paper was shared in one x-risk group, for example, as: “X-riskers, it would appear your time is now: ‘With increasing transportation we are close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens.’” My response: “We are **not** ‘close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens’.”)
(2) Responding to speculation that nCoV is a deliberately developed bioweapon, or was accidentally released from a BSL-4 lab in Wuhan. There isn’t evidence for either of these claims, and I think such speculation is unhelpful to make without evidence, not least because it can spread widely. Further, some people making the latter speculation didn’t seem to be aware of how common a class of viruses coronaviruses are (ranging from the common cold through to SARS). Whether or not a coronavirus was being studied at the Wuhan lab, I think it would not be a major coincidence to find a lab studying a coronavirus in a major city.
(3) Clarifying that the Event 201 exercise Johns Hopkins ran (which involved 65 million hypothetical deaths) was a tabletop simulation, not a prediction, and therefore could not be used to extrapolate an expectation of 65 million deaths from the current outbreak.
I made various other comments as part of discussions, but as I recall these were more to provide context or points for discussion than to disagree per se, and I don’t have time to dig them up.
The latter examples don’t relate to predictions of the severity of the outbreak, but rather to what I perceived at the time to be misunderstandings, misinformation, and unhelpful or ungrounded speculation.