I’d suggest considering:
Arms Control Association https://www.armscontrol.org/
Nuclear Threat Initiative https://www.nti.org/about/global-nuclear-policy/
Likewise for publications at CSER. I’d add that for policy work, written policy submissions often provide summaries, key takeaways, and action-relevant points based on ‘primary’ work done by the centre and its collaborators, where the primary work is peer-reviewed.
We’ve received informal/private feedback from people in policy/government roles at various points that our submissions and presentations have been particularly useful or influential. We’ll also have some confidential written testimony supporting this for a few examples, for University REF (Research Excellence Framework) assessment purposes; unfortunately, I don’t have permission to share these publicly at this time. However, this comment I wrote last year provides some indirect indications of the work being seen as high-quality (being among a select number invited to present orally, follow-up engagement, etc.).
Thanks Peter, that’s awesome!
Thank you for writing this up; it’s extremely helpful, especially in such a rapidly developing space. A very optional request: might you consider updating this e.g. once a week with significant relevant developments on these ideas/questions? With so many of us involved in many different ways, it could provide a helpful evolving roadmap. Feel free to ignore if too much hassle or redundant with summaries elsewhere.
[disclaimer: I am co-director of CSER, but giving an individual view]. Hi, a quick comment (apologies that I may not have time to respond to replies, very busy period).
>“We understand that CSER’s work mostly has little direct relevance to COVID-19, but some of it is relevant to pandemics and that they are looking to expand this element of their team. We believe that this may be a suitable choice for funders inspired to support pandemics as a result of the coronavirus outbreak.”
This is accurate in my view. However, I would emphasise that for EA funders keen to support (a) *direct* response to Covid-19 and/or (b) most time-effective use of funds relating to the current situation within the next 6 months, my view is that there are likely to be more timely interventions than supporting CSER at this immediate time.
E.g. we ourselves are working to support other initiatives by collaborators relating to the immediate situation (I have been looking for ways to support Univursa*, whose researchers we’ve worked with before, and which I individually consider particularly promising in the current situation). As the writeup says, our work is more focused on broader GCR and pandemic/biorisk governance and preparedness. We are in the process of making a number of hires (50% of whom are biorisk/epidemiology/biosecurity specialists). I expect we will have a lesser need for additional funding in the 0-6 month window. In the >6 month window, as the world (hopefully) moves from immediate crisis response to better preparedness/governance/biosecurity, and as our expanded bio team develops and expands its work relevant to this, we are likely to have significantly more RFMF (although I could not give a view at this time on the comparative value of funds versus other orgs in future). I should also mention that some of our work is likely to be under the banner of other initiatives our researchers are a part of (e.g. the BioRISC initiative, which has gained good traction in the UK policy context: https://www.caths.cam.ac.uk/research/biorisc).
Very grateful to Sanjay, and to everyone else working hard to identify opportunities to combat Covid-19!
*Footnote on my being excited about Univursa: while the approach was initially developed with a focus on haemorrhagic epidemics (e.g. Ebola), based on my analysis of the method and discussion with the researchers, I believe it will be very suitable for adaptation to Covid-19 diagnostics (although no guarantees can be made until database development and field testing are completed), and could play a very important role in resource-limited settings like sub-Saharan Africa, where testing and outbreak-detection capacity is extremely limited. Further, above and beyond regional benefits, it is my understanding that unless appropriate tools are provided to these regions, getting this pandemic under control globally will be a lot more challenging.
That would be a shame. If you’re fairly familiar with Xrisk literature and FHI’s work in particular, then a lot of the juiciest facts and details are in the footnotes—I found them fascinating.
Datapoint (my general considerations/thought processes around this, feeding into case-by-case decisions about my own activities rather than a blanket decision): as a young, healthy male, I am pretty unconcerned about the risk to myself individually, but quite concerned about becoming a vector for spread (especially to older or less robust people). While I have a higher personal risk tolerance than some, I don’t like the idea of imposing my risk tolerance on others. Particularly when travelling/fatigued/jetlagged, I’m not 100% sure I trust my own attention to detail enough to reliably take all the necessary precautions carefully, so this makes me a little hesitant to take on long-haul travel to international events (I also work/interact with older colleagues reasonably regularly, and am concerned about the indirect effects of my actions on them).
I would also like to see society-level actions that reduce disease spread, and I intuitively feel that EA should be a participant in such actions, given it takes such risks seriously as a community.
The information Singapore is gathering, collating and making available is fascinating.
Singapore is also one of the nations that appears to be dealing most effectively with their coronavirus outbreak (rate of new cases is comparatively low). The country also had a very effective response to SARS in 2003. (Although by Western standards the extent to which they gather information on the population might be uncomfortable).
6 deaths now reported in Washington State is also consistent with the outbreak there being substantially larger than the 14 cases currently recorded.
FYI, sequencing from the Snohomish County, Washington cases suggests there has been cryptic transmission in Washington State for the last 3-6 weeks, and potentially a substantial outbreak (a few hundred cases) ongoing there (likely missed because of the focus on travellers returning from China).
Too early to have confidence on higher temperatures limiting spread IMO (although certainly some reason to hope): cases in Japan are only <2.5x those in Singapore (234 vs 102 last I saw, and IIRC it got to Japan slightly earlier); surveillance and testing in African nations are unlikely to be as extensive as in e.g. Japan/South Korea; and there is likely less travel volume through African nations than through some of the Asian hubs.
I must admit, I would not make the same bet at the same odds on the 27th of February 2020.
Well done on your charitable giving, and thank you for sharing! For me, it’s important and inspirational to hear about giving at all levels (and sometimes we hear less about giving at the level less high-earning people can afford, so this is great).
Sorry I missed this. My strongest responses over the last while have fallen into the categories of: (1) responding to people claiming existential (or approaching-existential) risk potential, or sharing papers by people like Taleb stating that we are entering a phase where this is near-certain (e.g. https://static1.squarespace.com/static/5b68a4e4a2772c2a206180a1/t/5e2efaa2ff2cf27efbe8fc91/1580137123173/Systemic_Risk_of_Pandemic_via_Novel_Path.pdf).
That paper was shared in one xrisk group, for example, as: “X-riskers, it would appear your time is now: ‘With increasing transportation we are close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens.’” My response: “We are **not** ‘close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens’.”
Or, responding to speculation that nCoV is a deliberately developed bioweapon, or was accidentally released from a BSL-4 lab in Wuhan. There isn’t evidence for either of these, and I think such speculation is unhelpful to make without evidence, especially since it can spread widely. Further, some people making the latter speculation didn’t seem to be aware of what a common class of virus coronaviruses are (ranging from the common cold through to SARS). Whether or not a coronavirus was being studied at the Wuhan lab, I think it would not be a major coincidence to find a lab studying a coronavirus in a major city.
A third example was clarifying that the Event 201 exercise Johns Hopkins ran (which involved 65 million hypothetical deaths) was a tabletop simulation, not a prediction, and therefore could not be used to extrapolate an expectation of 65 million deaths from the current outbreak.
I made various other comments as part of discussions, but more providing context or points for discussion etc as I recall as opposed to disagreeing per se, and don’t have time to dig them up.
The latter examples don’t relate to predictions of the severity of the outbreak, but rather to what I perceived at the time to be misunderstandings, misinformation, and unhelpful/ungrounded speculation.
On (2), I would note that the ‘hype’ criticism is commonly made about the claims of a range of individual groups in AI. Criticisms of DeepMind’s claims, and of IBM’s (the usefulness/impact of IBM Watson in health), come immediately to mind, as do criticisms of claims by a range of groups re: deployment of self-driving cars. It’s also a criticism made of the field as a whole (e.g. see various comments by Gary Marcus, Jack Stilgoe, etc.). This does not necessarily mean that it’s untrue of OpenAI (or that OpenAI are not one of the ‘hypier’ groups), but I think it’s worth noting that this is not unique to OpenAI.
A few comments from Xrisk/EA folks that I’ve seen (which I agree with):
FHI’s Markus Anderljung: https://twitter.com/Manderljung/status/1229863911249391618
CSER’s Haydn Belfield: https://twitter.com/HaydnBelfield/status/1230119965178630149
To me, AI heavyweight, past president of AAAI, and past critic of OpenAI Rao Kambhampati put it well: written with the tone of a hit piece, but without an actual hit (i.e. any revelation that actually justifies it):
I don’t think so to any significant extent in most circumstances, and any tiny spike is counterbalanced by the general benefits David points to. My understanding (as a former competitive runner) is that extended periods of heavily overdoing exercise (overtraining) can inhibit the immune system, among other symptoms, but this is rare for people who are just generally keeping fit (barring e.g. someone jumping straight into marathon/triathlon training without building up). Other things to avoid/be mindful of are the usual: hanging around in damp clothes in the cold, hygiene in group sporting/exercise contexts, etc.
Thanks bmg. FWIW, I provide my justification (from my personal perspective) here: https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-wuhan-coronavirus-outbreak?commentId=mWi2L4S4sRZiSehJq
Thanks Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in this specific case or more generally); but it led me to the conclusion that actually stating my motivations, rather than assuming everyone reading knows them, would be helpful at this stage!
While my read of your post is that “there is the possibility that the aim could be interpreted this way”, which I regard as fair, I feel I should state explicitly, as I have not yet done so, that ‘fun and money’ was not my aim, and I strongly expect it was not Justin’s.
I think it’s important to be as well-calibrated as reasonably possible on events of global significance. In particular, I’ve been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCoV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated/examined claims can result in substantial material harms to people, in terms of stoking up unnecessary public panic, confusing accurate assessment of the situation, and creating ‘boy who cried wolf’ effects for future events. I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCoV.
(Edit: I do not mean this to refer to Justin’s Fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; it is more a broad comment on concerns re: poor calibration and the practical value of being well-calibrated.)
As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration. So I drew on it in this case. The payoff for me is small (£50; and I’m planning to give it to AMF); the payoff for Justin is higher but he accepted it as an offer rather than proposing it and so I doubt money is a factor for him either.
In the general sense I think both the concern about motivation and how something appears to parts of the community is valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics for the benefits-to-people I articulate above (and the broader benefits Habryka and others have articulated). I would suggest that achieving this balance may be a matter of clearly stating aims and motivations, and (as others have suggested) taking particular care with tone and framing, but I would welcome further guidance.
Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I’m grateful both for the time taken and the constructive nature of the discussion.