Biosecurity at Open Phil
ASB
Want to make a difference on policy and governance? Become an expert in something specific and boring
Just wanted to give my hearty +1 to approaching biosecurity issues with humility and striving to gain important context (which EAs often lack!)
Hi, thanks for raising these questions. I lead Open Philanthropy’s biosecurity and pandemic prevention work, and I was the investigator for this grant. For context, in September last year I got an introduction to Helena along with some information about work they were doing in the health policy space. Before recommending the grant, I did background reference calls on the impact claims they were making, considered concerns similar to the ones raised in this post, and ultimately felt there was enough of a case to place a hits-based bet (especially given the more permissive funding bar at the time).
Just so there’s no confusion: I think it’s easy to misread the nepotism claim as saying that I or Open Phil has a conflict of interest with Helena, and I want to clarify that this is not the case. My total interactions with Helena have been three phone calls and some email, all related to health security work.
Excited to see this kind of analysis!
Worried that this is premature:
there is no reason for the great powers to ever deploy or develop planet-killing kinetic bombardment capabilities
This seems true to a first approximation, but if the risk we are preventing is tiny, then even a tiny chance of dual use becomes a big deal (see the rough comparison after the examples below). The historical behavior of states suggests we can’t put the chance of something like this below 1 in 10,000. Some examples:
During WW2, there were powerful elements within the Japanese government that advocated total annihilation rather than surrender (Wikipedia).
Deterrence can benefit from credible signals of suicidal craziness (e.g. the ‘Samson Option’, named after the biblical figure who pulled down a temple, killing himself and everyone inside).
The Soviet bioweapons program invested heavily in contagious agents (e.g. smallpox) and in modifying them to overcome medical countermeasures. This work seemed to be driven by weird bureaucratic incentives that were largely divorced from the Soviet Union’s rational strategic objectives.
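To make that concrete, here’s a rough back-of-the-envelope comparison. The 1-in-10,000 figure is the lower bound argued for above; the baseline risk number is a purely illustrative placeholder, not an estimate I’m defending:

```python
# Illustrative Fermi comparison (placeholder numbers, not real estimates).
# Suppose the catastrophe the capability would avert has a tiny baseline chance,
# while building the capability creates at least a 1-in-10,000 chance of misuse
# of comparable severity.
baseline_risk_averted = 1e-6   # assumed chance of the catastrophe we'd prevent
misuse_risk_created   = 1e-4   # lower bound on the chance the capability is misused

# If the two outcomes are similarly bad, the project is net-negative whenever
# misuse_risk_created exceeds baseline_risk_averted.
print(misuse_risk_created / baseline_risk_averted)  # ~100x more expected harm created than averted
```

Under those placeholder numbers, the expected harm created is roughly two orders of magnitude larger than the harm averted, which is why even a small dual-use chance dominates when the baseline risk is tiny.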
See Daniel Greene’s comment about creating better norms around publishing dangerous information (he beat me to it!).
Request for proposals: Help Open Philanthropy quantify biological risk
I won’t comment on their endorsements or strategy, but I will say that even if Carrick is a long shot, it doesn’t necessarily follow that donating is a bad use of marginal dollars.
Thanks for flagging; I missed this and agree it should be in the blog category per the policy. I’ll chat with the mods to figure out how to fix it.
The best $5,800 I’ve ever donated (to pandemic prevention).
Update: after discussing with Oli and looking at some background documentation, we think the claim about ‘potentially thousands of lives’ is sufficiently supported.
Dropping a quick comment to say I’ve upvoted this and might respond with more later. I do concede that the claim about thousands of lives was not thoroughly scrutinized, and I’m getting more info on that now (I’ll remove it if it doesn’t check out). I otherwise stand by what I’ve written, and I also think Oli has worthwhile points.
Concrete Biosecurity Projects (some of which could be big)
Thanks! And yes, this seems right to me.
Countermeasures & substitution effects in biosecurity
Huge +1 to this. If anybody is reading this and wants to get funded to start down this career track, please apply to Open Phil’s biosecurity scholarship: https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/open-philanthropy-biosecurity-scholarships
The program supports both degree programs and independent projects for learning about the field.
Thanks Evan, and welcome to the forum! I agree this is an important question for prioritization, and it does imply that AI is substantially more important than bio (a statement I believe despite working on biosecurity, at least if we are only considering longtermism). As Linch mentioned, we have policies/norms against publicly brainstorming information hazards. If somebody is concerned about a biological risk that might constitute an information hazard, they can contact me privately to discuss options for responsible disclosure.
One possible advantage of using the platform is that donations to charities are tax deductible, whereas donations to campaigns are not. If set up well, this mechanism could enable somebody to ‘donate’ to a campaign with tax deductibility.
Strong upvote. I think more people should be considering this as a skill/career to develop. For arms control and verification, I feel like these tools are potentially being overlooked (and could be useful across multiple GCR/xrisk-relevant areas).
I’ve heard good things about Jeffrey Lewis and his thinking on OSINT tools on the nuclear side of things: https://www.middlebury.edu/institute/people/jeffrey-lewis
Because you’ve been a public servant who took on the responsibility of shutting down the Soviet bioweapons program, securing loose nuclear material, and kickstarting a wildly successful early career program while at the DoD, I need to know: is it ever difficult being so awesome?
And, what would your advice be for younger folks aiming to follow in your footsteps?