I direct the AI: Futures and Responsibility (AI:FAR) Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and on AI strategy/policy with the Centre for the Future of Intelligence.
A few comments from Xrisk/EA folks that I’ve seen (which I agree with):
FHI’s Markus Anderljung: https://twitter.com/Manderljung/status/1229863911249391618
CSER’s Haydn Belfield: https://twitter.com/HaydnBelfield/status/1230119965178630149
To me, AI heavyweight and past president of AAAI (and past critic of OpenAI) Rao Kambhampati put it well: it is written like, and has the tone of, a hit piece, but without an actual hit (i.e. any revelation that actually justifies it):
https://twitter.com/rao2z/status/1229599668683673600
I don’t think so to any significant extent in most circumstances, and any tiny spike is counterbalanced by the general benefits David points to. My understanding (as a former competitive runner) is that extended periods of heavily overdoing it with exercise (overtraining) can lead to an inhibited immune system, among other symptoms, but this is rare among people just keeping generally fit (barring e.g. someone jumping straight into marathon/triathlon training without building up). Other things to avoid/be mindful of are the usual: hanging around in damp clothes in the cold, hygiene in group sporting/exercise contexts, etc.
Thanks bmg. FWIW, I provide my justification (from my personal perspective) here: https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-wuhan-coronavirus-outbreak?commentId=mWi2L4S4sRZiSehJq
Thanks Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in this specific case or more generally); but it led me to the conclusion that actually stating my motivations, rather than assuming everyone reading knows them, would be helpful at this stage!
While my read of your post is "there is the possibility that the aim could be interpreted this way", which I regard as fair, I feel I should state explicitly, as I have not yet done so, that 'fun and money' was not my aim (and, I strongly expect, not Justin's).
I think it’s important to be as well-calibrated as reasonably possible on events of global significance. In particular, I’ve been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCoV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated/examined claims can result in substantial material harms: stoking unnecessary public panic, confusing accurate assessment of the situation, and creating 'boy who cried wolf' effects for future events. I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCoV.
(Edit: I do not mean this to refer to Justin's Fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; this is more a broad comment on concerns about poor calibration and the practical value of being well-calibrated.)
As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration. So I drew on it in this case. The payoff for me is small (£50; and I’m planning to give it to AMF); the payoff for Justin is higher but he accepted it as an offer rather than proposing it and so I doubt money is a factor for him either.
In the general sense, I think both the concern about motivation and the concern about how something appears to parts of the community are valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics (the benefits to individuals I articulate above, and the broader benefits Habryka and others have articulated). I would suggest that achieving this balance may be a matter of clearly stating aims and motivations and, as others have suggested, taking particular care with tone and framing, but I would welcome further guidance.
Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I’m grateful both for the time taken and the constructive nature of the discussion.
Thanks, good to know on both, appreciate the feedback.
I would similarly be curious to understand the heavy downvoting of my comment offering to remove my comments, given the concerns raised and the encouragement to consider doing so. It is by far the most downvoted comment I’ve ever had. This may just be an artefact of how my call for objections manifested: I was anticipating comments stating an objection, like Ben's and Habryka's, which would be upvoted if popular, but people may simply have expressed their objection by downvoting the original offer. If so, that's fine.
Another possible explanation is an objection to me even making the offer in the first place. My steelman for this is that even the offer of self-censorship of certain practices in certain situations could be seen as coming at a very heavy cost to group epistemics. From an individual-posting-to-forum perspective, however, this feels like an uncomfortable thing to be punished for. Posting possibly-controversial material to a public forum has some unilateralist's-curse elements: the risk is distributed across the overall forum, and the person who posts the possibly-controversial thing is likely to be someone who deems the risk lower than others do. And we are not always the best at impartially judging our own actions. So when arguments are made in good faith that an action may result in harm to the group, it seems a reasonable step to offer to withdraw the action, and to signal a willingness to cooperate with whatever the group (or the moderators, I guess) deems to be in the group's interest. I also built in a time delay, to allow for objections and further views to be raised before taking action. I would anticipate a more negative response if I were calling for the deletion of others' comments, but this was my own comment.
I would also note that offering to delete one’s comments comes at a personal cost, as does acknowledging possible fault of judgement; having an avalanche of negative karma on top of it adds to the discomfort.
If there’s something else going on—e.g. a sense that I was being dishonest about following through on the offer to delete; or something else—it would be good to know. I guess there could be a negative reaction to expressing the view that Chi’s perspective is valid. In my view, a point can be valid without being action-deciding. Here there are multiple considerations which I would all see as valid (value of betting to calibrate beliefs; value of doing so in public to reinforce a norm the group sees as beneficial and promote that norm to others; value of avoiding making insensitive-seeming posts that could possibly cause reputational damage to the group). The question is one of weighting of considerations—I have my own views, but it was very helpful to get a broader set of views in order to calibrate my actions.
My take is that, at this stage, this has been resolved in favour of "editing for tone but keeping the bet posts". I have done the editing for tone. I am happy with this outcome, and I hope most others are too.
My own personal view is that public betting on beliefs is good: it's why I did it (both this time and in the past), and my preference is to continue doing so. However, my take is that the discussion highlighted that in certain circumstances (such as betting on predictions about an ongoing mass-fatality event) it is worth being particularly careful about tone.
Re: Michael & Khorton's points: (1) Michael, fully agreed; it was a casual figure of speech, which I've now deleted. I apologise. (2) I've done some further editing for tone, but would be grateful if others had further suggestions.
I also agree re: Chi’s comment—I’ve already remarked that I think the point was valid, but I would add that I found it to be respectful and considerate in how it made its point (as one of the people it was directed towards).
It’s been useful for me to reflect on. I think it was a combination of two things for me. One is some inherent personal discomfort/concern about causing offence by effectively saying "I think you're wrong, and I'm willing to bet you're wrong", which I think I unintentionally counteracted with (possibly excessive) levity. The second is how quickly the disconnect can happen between the initial discussion of a very serious topic and checking in on the forum several days later to quickly respond to some math. Both are things I will be more careful about going forward. Lastly, I may have been spending too much time around risk folk, for whom certain discussions become so standard that one forgets how they can come across.
I’m happy to remove my comments; I think Chi raises a valid point. The aim was basically calibration. I think this kind of betting is quite common in EA and forecasting circles, but I agree it could look morbid from the outside, and these posts are publicly searchable. (I've also been upbeat in my tone, for friendliness/politeness towards people with different views, but this could be misread as a lack of respect for the gravity of the situation.) Unless this post receives strong objections by this evening, I will delete my comments or ask the moderators to delete them.
10:1 on the original (1 order of magnitude) it is.
Possibility of verbal confusion, as this is how most people vocalise 'CSER' (which is also where EA folk in the UK tend to go).
(We had a ‘Julius’ for a while, which was excellent).
Too good—how could you possibly turn this down!
This seems fair. I suggested the bet quite quickly, and without having time to work through the math I proposed terms that felt conservative from the point of view of my beliefs. The more I think about it, (a) the more confident I am in my beliefs, and (b) the more I feel the offer was not as generous as I originally thought.* I have a personal liking for binary bets rather than proportional payoffs. As a small concession in light of the points raised, I'd be happy to modify the terms retroactively to make them more favourable to Justin, offering either of the following:
(i) Doubling the odds against me to 10:1 (rather than 5:1) on the original claim (fatalities at least an order of magnitude lower than his Fermi estimate). So his £50 would win £500 of mine.
OR
(ii) 5:1 on fatalities at least 1.5 orders of magnitude (≈32x) lower than his Fermi estimate (rather than 10x).
(My intuition is that (ii) is a better deal for Justin than (i), but I haven't worked it through; see the rough expected-value sketch below.)
(*i.e. at the time of the bet; I think the likelihood of this being a severe global pandemic is now diminishing further in my mind.)
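For anyone who wants to work it through, here is a minimal sketch (Python) of the expected-value comparison from Justin's side of the bet. The two probabilities are illustrative placeholders I've chosen for the example, not anyone's actual credences; plug in your own.

```python
# Rough expected-value comparison of the two modified offers, from
# Justin's side. The probabilities below are illustrative placeholders,
# not anyone's actual credences.

def ev_for_justin(stake: float, payout: float, p_win: float) -> float:
    """Expected value for Justin: he stakes `stake` to win `payout`
    with probability `p_win`, losing the stake otherwise."""
    return p_win * payout - (1 - p_win) * stake

# Placeholder credences (assumptions, for illustration only):
# p_within_1oom  = P(fatalities end up within 1 order of magnitude of the Fermi estimate)
# p_within_15oom = P(fatalities end up within 1.5 orders of magnitude of it)
# By construction, p_within_15oom >= p_within_1oom.
p_within_1oom = 0.10
p_within_15oom = 0.25

ev_i = ev_for_justin(stake=50, payout=500, p_win=p_within_1oom)    # offer (i): 10:1 on 1 OoM
ev_ii = ev_for_justin(stake=50, payout=250, p_win=p_within_15oom)  # offer (ii): 5:1 on 1.5 OoM

print(f"Offer (i):  EV = £{ev_i:+.2f}")   # £+5.00 with these placeholders
print(f"Offer (ii): EV = £{ev_ii:+.2f}")  # £+25.00 with these placeholders
```

With these placeholder numbers, (ii) does come out better for Justin: the extra probability mass between 1 and 1.5 orders of magnitude below the Fermi estimate more than compensates for the lower payout. The comparison flips if that mass is small.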
I like Rosie’s suggestions (inspired by Jonas’s).
HEAR: Hub for Enabling EA Research. HEALR: Hub for Enabling EA Learning and Research.
Or call it the EARL—EA Research and Learning Centre (the ‘centre’ bit can often easily be dropped from the acronym).
Re: whose mortality estimates to use, I suggest we use Metaculus's list here (in which the WHO has the highest ranking) as the standard (with the caveat above).
MERS was pretty age-agnostic, while SARS had much higher mortality rates in the over-60s. All the current reports from China suggest this virus mainly affects older people and those with preexisting health conditions. 'Coronavirus' is a broad class covering everything from the common cold to MERS, so I'm not sure there's good ground to anchor too closely to SARS or MERS as a reference class.
Agreed, thank you Justin. (I also hope I win the bet, and not for the money: while it is good to consider the possibility of the most severe plausible outcomes rigorously and soberly, it would be terrible if they came about in reality.) The bet resolves on 28 January 2021, though if the outcome is within an order of magnitude of the win criterion and there is uncertainty about fatalities, I'm happy to reserve the final decision for up to two further years until rigorous analysis is done (see e.g. the swine flu epidemiology studies, which revised fatality estimates upwards significantly several years after the outbreak).
To anyone else reading: I'm happy to stake up to £250 against up to £50 of yours, if you want to take the same side as Justin.
On (2), I would note that the 'hype' criticism is one commonly made both about the claims of a range of individual groups in AI and about the field as a whole. Criticisms of DeepMind's claims and of IBM's (the usefulness/impact of IBM Watson in health) come immediately to mind, as do criticisms of claims by various groups about the deployment of self-driving cars; similar criticisms have also been levelled at the field in general (see e.g. various comments by Gary Marcus, Jack Stilgoe, etc.). This does not necessarily mean the criticism is untrue of OpenAI (or that OpenAI is not one of the 'hypier'), but I think it's worth noting that it is not unique to OpenAI.