I can guess that the primary motivation is not “making money” or “the feeling of winning and being right”—which would be quite inappropriate in this context
I don’t think these motivations would be inappropriate in this context. Those are fine motivations that we healthily leverage in large parts of the world to cause people to do good things, so of course we should leverage them here to allow us to do good things.
The whole economy relies on people being motivated to make money, and it has been a key ingredient to our ability to sustain the most prosperous period humanity has ever experienced (cf. more broadly the stock market). Of course I want people to have accurate beliefs by giving them the opportunity to make money. That is how you get them to have accurate beliefs!
At least from a common-sense morality perspective, this doesn’t sit right with me. I do feel that it would be wrong for two people to get together to bet about some horrible tragedy—“How many people will die in this genocide?”, “Will troubled person X kill themselves this year?”, and so on—purely because they thought it’d be fun to win a bet and make some money off a friend. I definitely wouldn’t feel comfortable if a lot of people around me were doing this.
When the motives involve working to form more accurate and rigorous beliefs about ethically pressing issues, as they clearly were in this case, I think that’s a different story. I’m sympathetic to the thought that it would be bad to discourage this sort of public bet. I think it might also be possible to argue that, if the benefits of betting are great enough, then it’s worth condoning or even encouraging more ghoulishly motivated bets too. I guess I don’t really buy that, though. I don’t think that a norm specifically against public bets that are ghoulish from a common-sense morality perspective would place very important limitations on the community’s ability to form accurate beliefs or do good.
I do also think there are significant downsides, on the other hand, to having a culture that disregards common-sense feelings of discomfort like the ones Chi’s comment expressed.
[[EDIT: As a clarification, I’m not classifying the particular bet in this thread as “ghoulish.” I share the general sort of discomfort that Chi’s comment describes, while also recognizing that the bet was well-motivated and potentially helpful. I’m more generally pushing back against the thought that evident motives don’t matter much or that concerns about discomfort/disrespectfulness should never lead people to refrain from public bets.]]
I guess I don’t really buy that, though. I don’t think that a norm specifically against public bets that are ghoulish from a common-sense morality perspective would place very important limitations on the community’s ability to form accurate beliefs or do good.
Responding to this point separately: I am very confused by this statement. A large fraction of the topics we discuss within the EA community are pretty directly about the death of thousands, often millions or billions, of other people. From biorisk (as discussed here), to global health and development, to the risk of major international conflict, a lot of the topics we think about involve people forming models that quite directly require forecasting the potential impacts of various life-or-death decisions.
I expect bets about a large number of Global Catastrophic Risks to be of great importance, and to similarly be perceived as “ghoulish” as you describe here. Maybe you are describing a distinction that is more complicated than I am currently comprehending, but I at least would expect Chi and Greg to object to bets of the type “what is the expected number of people dying in self-driving car accidents over the next decade?”, “Will there be an accident involving an AGI project that would classify as a ‘near-miss’, killing at least 10000 people or causing at least 10 billion dollars in economic damages within the next 50 years?” and “what is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?”.
All of these just strike me as straightforwardly important questions, that an onlooker could easily construe as “ghoulish”, and I expect would be strongly discouraged by the norms that I see being advocated for here. In the case of the last one, it is probably the key fact I would be trying to estimate when evaluating a new bednet distribution method.
Ultimately, I care a lot about modeling the risks of various technologies and understanding which technologies and interventions can more effectively save people’s lives, and whenever I try to understand that, I have to discuss and build models of how those will impact other people’s lives, often in drastic ways.
Compared to the above, the bet between Sean and Justin does not strike me as particularly ghoulish (and I expect that to be confirmed by public surveys of people’s naive perceptions, as Greg suggested). So I see little alternative to concluding that you are also advocating for banning bets on any of the above propositions, which leaves me confused about why you think doing so would not inhibit our ability to do good.
There might also be some confusion about the purpose and impact of bets in our community. While the number of bets being made is relatively small, the effect of having a broader betting culture is quite major, at least in my experience of interacting with the community.
More precisely, we have a pretty concrete norm that if someone makes a prediction or a public forecast, then it is usually valid (with some exceptions) to offer a bet with equal or better odds than the forecasted probability to the person making the forecast, and expect them to take you up on the bet. If the person does not take you up on the bet, this usually comes with some loss of status and reputation, and is usually (correctly, I would argue) interpreted as evidence that the forecast was not meant sincerely, or the person is trying to avoid public accountability in some other way. From what I can tell, this is exactly what happened here.
The effects of this norm (at least as I have perceived it) are large and strongly positive. From what I can tell, it is one of the norms that ensures the consistency of the models that our public intellectuals express, and when I interact with communities that do not have this norm, I very concretely experience many people no longer using probabilities in consistent ways, and can concretely observe large numbers of negative consequences arising from the lack of this norm.
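To make the norm concrete: if someone forecasts probability p for an event, offering them "equal or better odds" means offering a bet whose expected value, computed under their own stated probability, is zero or positive—so a sincere forecaster should be willing to accept. A minimal sketch of that arithmetic (the function name and numbers are my own illustration, not from the thread):

```python
def expected_value(p: float, stake: float, payout_if_win: float) -> float:
    """Expected profit of a bet, for someone who assigns probability p
    to winning: they gain payout_if_win with probability p, and lose
    their stake with probability (1 - p)."""
    return p * payout_if_win - (1 - p) * stake

# A forecaster who states p = 0.25 is exactly indifferent at fair 3:1 odds
# (risk 10 to win 30):
fair = expected_value(0.25, stake=10, payout_if_win=30)    # 0.0

# At better-than-fair odds (4:1), the bet is positive-EV under their own
# stated belief, so declining it is evidence the forecast wasn't sincere:
better = expected_value(0.25, stake=10, payout_if_win=40)  # 2.5
print(fair, better)
```

This is why, under the norm described above, refusing such an offer carries reputational weight: a declined positive-EV bet suggests the publicly stated probability was not the person's real credence.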
I think what’s confusing you is that people are selectively against betting based on its motivation.
In EA, people regularly talk about morbid topics, but the stated aim is to help people. In this case, the aim could be read as “having fun and making money”. It was the motivation that was a problem, not the act itself, for most people.
While my read of your post is “there is the possibility that the aim could be interpreted this way”, which I regard as fair, I feel I should state explicitly, as I have not yet done so, that ‘fun and money’ was not my aim (and, I strongly expect, not Justin’s either).
I think it’s important to be as well-calibrated as reasonably possible on events of global significance. In particular, I’ve been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCOV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated/examined claims can result in substantial material harms to people, in terms of stoking up unnecessary public panic, confusing accurate assessment of the situation, and creating ‘boy who cried wolf’ effects for future events. I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCOV.
(edit: I do not mean this to refer to Justin’s Fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; this is more a broad comment on concerns re: poor calibration and the practical value of being well-calibrated).
As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration. So I drew on it in this case. The payoff for me is small (£50, which I’m planning to give to AMF); the payoff for Justin is higher, but he accepted it as an offer rather than proposing it, so I doubt money is a factor for him either.
In the general sense I think both the concern about motivation and how something appears to parts of the community is valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics for the benefits-to-people I articulate above (and the broader benefits Habryka and others have articulated). I would suggest that achieving this balance may be a matter of clearly stating aims and motivations, and (as others have suggested) taking particular care with tone and framing, but I would welcome further guidance.
Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I’m grateful both for the time taken and the constructive nature of the discussion.
Following Sean here I’ll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (for the present case and for modelling possible future pandemics); from an inside-view model perspective, the numbers I was getting are quite worrisome. I felt that if I didn’t take him up on the bet, people wouldn’t take the issue as seriously, nor take explicitly modelling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem-solving around things like nCoV.
Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (with some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion rather than vague intuitions mediated by a bet, but exchanging intuitions is useful too. I also generally would rather make bets about things that are less grim, and wouldn’t have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. With grim bets, though, we should definitely pay attention to how they might appear to parts of the community, and make clearer what the intent and motivation behind the bet is.
Third, I wished to bring more attention and support to the issue in the hope that it causes people to take sensible personal precautions and that perhaps some of them can influence how things progress. I do not entirely know who reads this and some of them may have influence, expertise, or cleverness they can contribute.
I’m so sorry Sean, I took it as obvious that your motivation was developing accurate beliefs, hopefully to help you help others, rather than fun and profit. Didn’t mean to imply otherwise!
Thanks Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in the specific case or more generally); but this led me to the conclusion that actually stating my motivations rather than assuming everyone reading knows would be helpful at this stage!
I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCOV.
I would be interested to learn more about your views on the current outbreak. Can you link to the statements you made on social media, or present your perspective here (or as a top-level comment or post)?
(Shared in one x-risk group, for example, as: “X-riskers, it would appear your time is now: ‘With increasing transportation we are close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens.’” My response: “We are **not** ‘close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens’.”)
Or, responding to speculation that nCoV is a deliberately developed bioweapon, or was accidentally released from a BSL-4 lab in Wuhan. There isn’t evidence for either of these, and I think they are unhelpful types of speculation to make without evidence; such speculations can spread widely. Further, some people making the latter speculation didn’t seem to be aware of how common a class of viruses coronaviruses are (ranging from the common cold through to SARS). Whether or not a coronavirus was being studied at the Wuhan lab, I think it would not be a major coincidence to find a lab studying a coronavirus in a major city.
A third example was clarifying that the Event 201 exercise Johns Hopkins ran (which involved 65 million hypothetical deaths) was a tabletop simulation, not a prediction, and therefore could not be used to extrapolate an expectation of 65 million deaths from the current outbreak.
I made various other comments as part of discussions, but those were more to provide context or points for discussion, as I recall, rather than to disagree per se, and I don’t have time to dig them up.
The latter examples don’t relate to predictions of the severity of the outbreak; they relate more to what I perceived at the time to be misunderstandings, misinformation, and unhelpful or ungrounded speculation.
To clarify a bit, I’m not in general against people betting on morally serious issues. I think it’s possible that this particular bet is also well-justified, since there’s a chance some people reading the post and thread might actually be trying to make decisions about how to devote time/resources to the issue. Making the bet might also cause other people to feel more “on their toes” in the future, when making potentially ungrounded public predictions, if they now feel like there’s a greater chance someone might challenge them. So there are potential upsides, which could outweigh the downsides raised.
At the same time, though, I do find certain kinds of bets discomforting and expect a pretty large portion of people (esp. people without much EA exposure) to feel discomforted too. I think that the cases where I’m most likely to feel uncomfortable would be ones where:
The bet is about an ongoing, pretty concrete tragedy with non-hypothetical victims. One person “profits” if the victims become more numerous and suffer more.
The people making the bet aren’t, even pretty indirectly, in a position to influence the management of the tragedy or the dedication of resources to it. It doesn’t actually matter all that much, in other words, if one of them is over- or under-confident about some aspect of the tragedy.
The bet is made in an otherwise “casual”/”social” setting.
(Importantly) It feels like the people are pretty much just betting to have fun, embarrass the other person, or make money.
I realize these aren’t very principled criteria. It’d be a bit weird if the true theory of morality made a principled distinction between bets about “hypothetical” and “non-hypothetical” victims. Nevertheless, I do still have a pretty strong sense of moral queasiness about bets of this sort. To use an implausibly extreme case again, I’d feel like something was really going wrong if people were fruitlessly betting about stuff like “Will troubled person X kill themselves this year?”
I also think that the vast majority of public bets that people have made online are totally fine. So maybe my comments here don’t actually matter very much. I mainly just want to make the point that: (a) Feelings of common-sense moral discomfort shouldn’t be totally ignored or dismissed and (b) it’s at least sometimes the right call to refrain from public betting in light of these feelings.
At a more general level, I really do think it’s important for the community in terms of health, reputation, inclusiveness, etc., if common-sense feelings of moral and personal comfort are taken seriously. I’m definitely happy that the community has a norm of it typically being OK to publicly challenge others to bets. But I also want to make sure we have a strong norm against discouraging people from raising their own feelings of discomfort.
(I apologize if it turns out I’m disagreeing with an implicit straw-man here.)
The people making the bet aren’t, even pretty indirectly, in a position to influence the management of the tragedy or the dedication of resources to it. It doesn’t actually matter all that much, in other words, if one of them is over- or under-confident about some aspect of the tragedy.
Do you think the bet would be less objectionable if Justin was able to increase the number of deaths?
But if two people were (for example) betting on a prediction platform that’s been set up by public health officials to inform prioritization decisions, then this would make the bet better. The reason is that, in this context, it would obviously matter if their expressed credences are well-calibrated and honestly meant. To the extent that the act of making the bet helps temporarily put some observers “on their toes” when publicly expressing credences, the most likely people to be put “on their toes” (other users of the platform) are also people whose expressed credences have an impact. So there would be an especially solid pro-social case for making the bet.
I suppose this bullet point is mostly just trying to get at the idea that a bet is better if it can clearly be helpful. (I should have said “positively influence” instead of just “influence.”) If a bet creates actionable incentives to kill people, on the other hand, that’s not a good thing.
Thanks! I do want to stress that I really respect your motives in this case and your evident thoughtfulness and empathy in response to the discussion; I also think this particular bet might be overall beneficial. I also agree with your suggestion that explicitly stating intent and being especially careful with tone/framing can probably do a lot of work.
It’s maybe a bit unfortunate that I’m making this comment in a thread that began with your bet, then, since my comment isn’t really about your bet. I realize it’s probably pretty unpleasant to have an extended ethics debate somehow spring up around one of your posts.
I mainly just wanted to say that it’s OK for people to raise feelings of personal/moral discomfort and that these feelings of discomfort can at least sometimes be important enough to justify refraining from a public bet. It seemed to me like some of the reaction to Chi’s comment went too far in the opposite direction. Maybe wrongly/unfairly, it seemed to me that there was some suggestion that this sort of discomfort should basically just be ignored or that people should feel discouraged from expressing their discomfort on the EA Forum.
I expect bets about a large number of Global Catastrophic Risks to be of great importance, and to similarly be perceived as “ghoulish” as you describe here.
The US government attempted to create a prediction market to predict terrorist attacks. It was shut down basically because it was perceived as “ghoulish”.
My impression is that experts think that shutting down the market made terrorism more likely, but I’m not super well-informed.
I see this as evidence both that 1) markets are useful and 2) some people (including influential people like senators) react pretty negatively to betting on life or death issues, despite the utility.
Maybe you are describing a distinction that is more complicated than I am currently comprehending, but I at least would expect Chi and Greg to object to bets of the type “what is the expected number of people dying in self-driving car accidents over the next decade?”, “Will there be an accident involving an AGI project that would classify as a ‘near-miss’, killing at least 10000 people or causing at least 10 billion dollars in economic damages within the next 50 years?” and “what is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?”.
Just as an additional note, to speak directly to the examples you gave: I would personally feel very little discomfort if two people (esp. people actively making or influencing decisions about donations and funding) wanted to publicly bet on the question: “What is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?” I obviously don’t know, but I would guess that Chi and Greg would both feel more comfortable about that question as well. I think that some random “passerby” might still feel some amount of discomfort, but probably substantially less.
I realize that there probably aren’t very principled reasons to view one bet here as intrinsically more objectionable than others. I listed some factors that seem to contribute to my judgments in my other comment, but they’re obviously a bit of a hodgepodge. My fully reflective moral view is also that there probably isn’t anything intrinsically wrong with any category of bets. For better or worse, though, I think that certain bets will predictably be discomforting and wrong-feeling to many people (including me). Then I think this discomfort is worth weighing against the plausible social benefits of the individual bet being made. At least on rare occasions, the trade-off probably won’t be worth it.
I ultimately don’t think my view here is that different than common views on lots of other more mundane social norms. For example: I don’t think there’s anything intrinsically morally wrong about speaking ill of the dead. I recognize that a blanket prohibition on speaking ill of the dead would be a totally ridiculous and socially/epistemically harmful form of censorship. But it’s still true that, in some hard-to-summarize class of cases, criticizing someone who’s died is going to strike a lot of people as especially uncomfortable and wrong. Even without any specific speech “ban” in place, I think that it’s worth giving weight to these feelings when you decide what to say.
What this general line of thought implies about particular bets is obviously pretty unclear. Maybe the value of publicly betting is consistently high enough to, in pretty much all cases, render feelings of discomfort irrelevant. Or maybe, if the community tries to have any norms around public betting, then the expected cost of wise bets avoided due to “false positives” would just be much higher than the expected cost of unwise bets made due to “false negatives.” I don’t believe this, but I obviously don’t know. My best guess is that it probably makes sense to strike a (messy/unprincipled/disputed) balance that’s not too dissimilar from balances we strike in other social and professional contexts.
(As an off-hand note, for whatever it’s worth, I’ve also updated in the direction of thinking that the particular bet that triggered this thread was worthwhile. I also, of course, feel a bit weird having somehow now written so much about the fine nuances of betting norms in a thread about a deadly virus.)
purely because they thought it’d be fun to win a bet and make some money off a friend.
I do think the “purely” matters a good bit here. While I would go as far as to argue that even purely financial motivations are fine (and should be leveraged for the public good when possible), I think, inasmuch as I understand your perspective, it becomes a lot less bad if people are only partially motivated by making money (or gaining status within their community).
As a concrete example, I think large fractions of academia are motivated by wanting a sense of legacy and prestige (this includes large fractions of epidemiology, which is highly relevant to this situation). Those motivations also feel not fully great to me, and I would feel worried about an academic system that tries to operate purely on those motivations. However, I would similarly expect an academic system that does not recognize those motivations at all, bans all expressions of those sentiments, and does not build systems that leverage them, to also fail quite disastrously.
I think in order to produce large-scale coordination, it is important to enable the leveraging of a large variety of motivations, while also keeping them in check by ensuring at least a minimum level of more aligned motivations (or some other external system that ensures partially aligned motivations still result in good outcomes).
Alex Tabarrok has written about this in his post “A Bet is a Tax on Bullshit”.
This doesn’t affect your point, but I just wanted to note that the post—including the wonderful title—was written by Alex Tabarrok.
Oops. Fixed.
I think what’s confusing you is that people are selectively against betting based on its motivation.
In EA, people regularly talk about morbid topics, but the stated aim is to help people. In this case, the aim could be read as “having fun and making money”. It was the motivation that was a problem, not the act itself, for most people.
While my read of your post is “there is the possibility that the aim could be interpreted this way” which I regard as fair, I feel I should state that ‘fun and money’ was not my aim, and (I strongly expect not Justin’s), as I have not yet done so explicitly.
I think it’s important to be as well-calibrated as reasonably possible on events of global significance. In particular, I’ve been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCOV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated/examined claims can result in substantial material harms to people, in terms of stoking up unnecessary public panic, confusing accurate assessment of the situation, and creating ‘boy who cried wolf’ effects for future events. I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCOV.
(edit: I do not mean this to refer to Justin’s fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; more a broad comment on concerns re: poor calibration and the practical value of being well-calibrated).
As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration. So I drew on it in this case. The payoff for me is small (£50; and I’m planning to give it to AMF); the payoff for Justin is higher but he accepted it as an offer rather than proposing it and so I doubt money is a factor for him either.
In the general sense I think both the concern about motivation and how something appears to parts of the community is valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics for the benefits-to-people I articulate above (and the broader benefits Habryka and others have articulated). I would suggest that achieving this balance may be a matter of clearly stating aims and motivations, and (as others have suggested) taking particular care with tone and framing, but I would welcome further guidance.
Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I’m grateful both for the time taken and the constructive nature of the discussion.
Following Sean here I’ll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (both for the present case and for modelling possible future pandemics): from an inside-view model perspective, the numbers I was getting were quite worrisome. I felt that if I didn't take him up on the bet, people wouldn't take the issue as seriously, nor take explicitly modeling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem-solving around things like nCoV.
Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (up to some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion rather than vague intuitions mediated by a bet, but exchanging intuitions is useful too. I also generally would rather make bets about things that are less grim, and wouldn't have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. With grim bets, though, we should definitely pay attention to how the bet might appear to parts of the community, and make clearer what the intent and motivation behind it is.
Third, I wished to bring more attention and support to the issue in the hope that it causes people to take sensible personal precautions and that perhaps some of them can influence how things progress. I do not entirely know who reads this and some of them may have influence, expertise, or cleverness they can contribute.
I’m so sorry Sean, I took it as obvious that your motivation was developing accurate beliefs, hopefully to help you help others, rather than fun and profit. Didn’t mean to imply otherwise!
Thanks Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in the specific case or more generally); but this led me to the conclusion that actually stating my motivations rather than assuming everyone reading knows would be helpful at this stage!
I would be interested to learn more about your views on the current outbreak. Can you link to the statements you made on social media, or present your perspective here (or as a top-level comment or post)?
Hi Wei,
Sorry I missed this. My strongest responses over the last while have fallen into the following categories: (1) Responding to people claiming existential (or near-existential) risk potential, or sharing papers by people like Taleb stating that we are entering a phase where this is near-certain, e.g. https://static1.squarespace.com/static/5b68a4e4a2772c2a206180a1/t/5e2efaa2ff2cf27efbe8fc91/1580137123173/Systemic_Risk_of_Pandemic_via_Novel_Path.pdf (shared in one x-risk group, for example, as: "X-riskers, it would appear your time is now: 'With increasing transportation we are close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens.'"). My response: "We are **not** 'close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens'."
(2) Responding to speculation that nCoV is a deliberately developed bioweapon, or was accidentally released from a BSL-4 lab in Wuhan. There is no evidence for either of these, and I think they are unhelpful types of speculation to make without evidence, since such speculations can spread widely. Further, some people making the latter speculation didn't seem to be aware of what a common class of virus coronaviruses are (ranging from the common cold through to SARS). Whether or not a coronavirus was being studied at the Wuhan lab, it would not be a major coincidence to find a lab studying a coronavirus in a major city.
A third example was clarifying that the Event 201 exercise Johns Hopkins ran (which involved 65 million hypothetical deaths) was a tabletop simulation, not a prediction, and therefore could not be used to extrapolate an expectation of 65 million deaths from the current outbreak.
I made various other comments as part of discussions, but as I recall these were more providing context or points for discussion, as opposed to disagreeing per se, and I don't have time to dig them up.
The latter examples don’t relate to predictions of the severity of the outbreak, more so to what I perceived at the time to be misunderstandings, misinformation, and unhelpful/ungrounded speculations.
To clarify a bit, I’m not in general against people betting on morally serious issues. I think it’s possible that this particular bet is also well-justified, since there’s a chance some people reading the post and thread might actually be trying to make decisions about how to devote time/resources to the issue. Making the bet might also cause other people to feel more “on their toes” in the future, when making potentially ungrounded public predictions, if they now feel like there’s a greater chance someone might challenge them. So there are potential upsides, which could outweigh the downsides raised.
At the same time, though, I do find certain kinds of bets discomforting and expect a pretty large portion of people (esp. people without much EA exposure) to feel discomforted too. I think that the cases where I’m most likely to feel uncomfortable would be ones where:
The bet is about an ongoing, pretty concrete tragedy with non-hypothetical victims. One person “profits” if the victims become more numerous and suffer more.
The people making the bet aren’t, even pretty indirectly, in a position to influence the management of the tragedy or the dedication of resources to it. It doesn’t actually matter all that much, in other words, if one of them is over- or under-confident about some aspect of the tragedy.
The bet is made in an otherwise “casual”/”social” setting.
(Importantly) It feels like the people are pretty much just betting to have fun, embarrass the other person, or make money.
I realize these aren't very principled criteria. It'd be a bit weird if the true theory of morality made a principled distinction between bets about "hypothetical" and "non-hypothetical" victims. Nevertheless, I do still have a pretty strong sense of moral queasiness about bets of this sort. To use an implausibly extreme case again, I'd feel like something was really going wrong if people were frivolously betting about stuff like "Will troubled person X kill themselves this year?"
I also think that the vast majority of public bets that people have made online are totally fine. So maybe my comments here don’t actually matter very much. I mainly just want to make the point that: (a) Feelings of common-sense moral discomfort shouldn’t be totally ignored or dismissed and (b) it’s at least sometimes the right call to refrain from public betting in light of these feelings.
At a more general level, I really do think it’s important for the community in terms of health, reputation, inclusiveness, etc., if common-sense feelings of moral and personal comfort are taken seriously. I’m definitely happy that the community has a norm of it typically being OK to publicly challenge others to bets. But I also want to make sure we have a strong norm against discouraging people from raising their own feelings of discomfort.
(I apologize if it turns out I’m disagreeing with an implicit straw-man here.)
Do you think the bet would be less objectionable if Justin was able to increase the number of deaths?
No, I think that would be far worse.
But if two people were (for example) betting on a prediction platform that's been set up by public health officials to inform prioritization decisions, then this would make the bet better. The reason is that, in this context, it would obviously matter if their expressed credences are well-calibrated and honestly meant. To the extent that the act of making the bet helps temporarily put some observers "on their toes" when publicly expressing credences, the people most likely to be put "on their toes" (other users of the platform) are also people whose expressed credences have an impact. So there would be an especially solid pro-social case for making the bet.
I suppose this bullet point is mostly just trying to get at the idea that a bet is better if it can clearly be helpful. (I should have said “positively influence” instead of just “influence.”) If a bet creates actionable incentives to kill people, on the other hand, that’s not a good thing.
Thanks bmg. FWIW, I provide my justification (from my personal perspective) here: https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-wuhan-coronavirus-outbreak?commentId=mWi2L4S4sRZiSehJq
Thanks! I do want to stress that I really respect your motives in this case and your evident thoughtfulness and empathy in response to the discussion; I also think this particular bet might be overall beneficial. I also agree with your suggestion that explicitly stating intent and being especially careful with tone/framing can probably do a lot of work.
It’s maybe a bit unfortunate that I’m making this comment in a thread that began with your bet, then, since my comment isn’t really about your bet. I realize it’s probably pretty unpleasant to have an extended ethics debate somehow spring up around one of your posts.
I mainly just wanted to say that it’s OK for people to raise feelings of personal/moral discomfort and that these feelings of discomfort can at least sometimes be important enough to justify refraining from a public bet. It seemed to me like some of the reaction to Chi’s comment went too far in the opposite direction. Maybe wrongly/unfairly, it seemed to me that there was some suggestion that this sort of discomfort should basically just be ignored or that people should feel discouraged from expressing their discomfort on the EA Forum.
The US government attempted to create a prediction market to predict terrorist attacks. It was shut down basically because it was perceived as “ghoulish”.
My impression is that experts think that shutting down the market made terrorism more likely, but I’m not super well-informed.
I see this as evidence both that 1) markets are useful and 2) some people (including influential people like senators) react pretty negatively to betting on life or death issues, despite the utility.
Just as an additional note, to speak directly to the examples you gave: I would personally feel very little discomfort if two people (esp. people actively making or influencing decisions about donations and funding) wanted to publicly bet on the question: “What is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?” I obviously don’t know, but I would guess that Chi and Greg would both feel more comfortable about that question as well. I think that some random “passerby” might still feel some amount of discomfort, but probably substantially less.
I realize that there probably aren’t very principled reasons to view one bet here as intrinsically more objectionable than others. I listed some factors that seem to contribute to my judgments in my other comment, but they’re obviously a bit of a hodgepodge. My fully reflective moral view is also that there probably isn’t anything intrinsically wrong with any category of bets. For better or worse, though, I think that certain bets will predictably be discomforting and wrong-feeling to many people (including me). Then I think this discomfort is worth weighing against the plausible social benefits of the individual bet being made. At least on rare occasions, the trade-off probably won’t be worth it.
I ultimately don’t think my view here is that different than common views on lots of other more mundane social norms. For example: I don’t think there’s anything intrinsically morally wrong about speaking ill of the dead. I recognize that a blanket prohibition on speaking ill of the dead would be a totally ridiculous and socially/epistemically harmful form of censorship. But it’s still true that, in some hard-to-summarize class of cases, criticizing someone who’s died is going to strike a lot of people as especially uncomfortable and wrong. Even without any specific speech “ban” in place, I think that it’s worth giving weight to these feelings when you decide what to say.
What this general line of thought implies about particular bets is obviously pretty unclear. Maybe the value of publicly betting is consistently high enough to, in pretty much all cases, render feelings of discomfort irrelevant. Or maybe, if the community tries to have any norms around public betting, then the expected cost of wise bets avoided due to "false positives" would just be much higher than the expected cost of unwise bets made due to "false negatives." I don't believe this, but I obviously don't know. My best guess is that it probably makes sense to strike a (messy/unprincipled/disputed) balance that's not too dissimilar from balances we strike in other social and professional contexts.
(As an off-hand note, for whatever it’s worth, I’ve also updated in the direction of thinking that the particular bet that triggered this thread was worthwhile. I also, of course, feel a bit weird having somehow now written so much about the fine nuances of betting norms in a thread about a deadly virus.)
I do think the “purely” matters a good bit here. While I would go as far as to argue that even purely financial motivations are fine (and should be leveraged for the public good when possible), I think in as much as I understand your perspective, it becomes a lot less bad if people are only partially motivated by making money (or gaining status within their community).
As a concrete example, I think large fractions of academia are motivated by wanting a sense of legacy and prestige (this includes large fractions of epidemiology, which is highly relevant to this situation). Those motivations also feel not fully great to me, and I would feel worried about an academic system that tries to purely operate on those motivations. However, I would similarly expect an academic system that does not recognize those motivations at all, bans all expressions of those sentiments, and does not build system that leverages them, to also fail quite disastrously.
I think in order to produce large-scale coordination, it is important to enable the leveraging of a large variety of motivations, while also keeping them in check by ensuring at least a minimum level of more aligned motivations (or some other external system that ensures partially aligned motivations still result in good outcomes).