Biosecurity at Open Phil
ASB
Mirror life is a concrete example of something I would consider an existential risk if we were unprepared. I like Niko and Fin’s writeup: https://press.asimov.com/articles/mirror-life
Mirror phages could help reduce the biomass of mirror bacteria in the environment (perhaps substantially), but they couldn’t drive the mirror bacteria to extinction, fully sterilize a contaminated area, or prevent the initial invasion of an environment. I’d therefore be reluctant to call mirror phages a ‘safeguard’ as opposed to something that is helpful but inadequate on its own. (Mirror phages as a potential future therapeutic are discussed in Chapter 5, section 5.3, page 117 of the technical report; mirror phages as environmental countermeasures are discussed in Chapter 8, section 8.6, pages 188-189.)
Also ‘immune to all our antibiotics, viruses, and immune defenses’ isn’t quite right. Some antibiotics are achiral or racemic mixtures, and certain components of our immune system might still attack mirror bacteria (e.g. parts of the complement system).
Fwiw my (admittedly vibes-based) sense is that Palantir was a deliberate push to fill the niche of ‘surveillance company’ in a way that had guardrails and civil liberties protected.
Others have made this point (e.g. Carl Shulman), but adding it here briefly: since humans are K-strategists, our risk/reward psychology is very risk-averse. The fitness cost of getting a limb ripped off heavily outweighs any fitness advantage of a good meal or mating opportunity. But for r-strategists, one good meal or one mating opportunity might easily be worth a high chance of losing a limb, since the fitness costs and benefits are far more skewed toward rare upside. If the fitness costs and benefits are skewed in this way, we should expect the reward/punishment signals to evolve accordingly, making the psychology of an r-strategist potentially very alien to us.
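The asymmetry above can be made concrete with a toy expected-fitness calculation. All numbers here are made-up illustrations, not empirical estimates; the point is only that the same gamble flips sign depending on how the payoffs are skewed.

```python
# Toy expected-fitness comparison for a risky opportunity (a meal or
# mating attempt with some chance of serious injury). All numbers are
# illustrative assumptions, not empirical values.

def expected_fitness(p_injury, injury_cost, reward):
    """Expected fitness change from taking the risky opportunity."""
    return (1 - p_injury) * reward - p_injury * injury_cost

# K-strategist: few offspring, so losing a limb is ruinous relative
# to the value of one extra meal or mating opportunity.
k = expected_fitness(p_injury=0.2, injury_cost=50, reward=1)

# r-strategist: enormous numbers of offspring, so one successful
# mating opportunity can dwarf the injury cost to one individual.
r = expected_fitness(p_injury=0.2, injury_cost=1, reward=50)

print(k)  # negative: not worth the gamble
print(r)  # positive: the same gamble is clearly worth taking
```

With identical injury probability, the K-strategist’s expected value is negative while the r-strategist’s is strongly positive, which is the selection pressure that would shape very different reward/punishment psychologies.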
Probably Good, 80k, ACE, GiveWell, Charity Entrepreneurship
Because you’ve been a public servant who took on the responsibility of shutting down the Soviet bioweapons program, securing loose nuclear material, and kickstarting a wildly successful early-career program while at the DoD, I need to know: is it ever difficult being so awesome?
And, what would your advice be for younger folks aiming to follow in your footsteps?
Just wanted to give my hearty +1 to approaching biosecurity issues with humility and striving to gain important context (which EAs often lack!).
Hi, thanks for raising these questions. I lead Open Philanthropy’s biosecurity and pandemic prevention work and I was the investigator of this grant. For context, in September last year, I got an introduction to Helena along with some information about work they were doing in the health policy space. Before recommending the grant, I did some background reference calls on the impact claims they were making, considered similar concerns to ones in this post, and ultimately felt there was enough of a case to place a hits-based bet (especially given the more permissive funding bar at the time).
Just so there’s no confusion: I think it’s easy to misread the nepotism claim as saying that I or Open Phil have a conflict of interest with Helena, and want to clarify that this is not the case. My total interactions with Helena have been three phone calls and some email, all related to health security work.
Excited to see this kind of analysis!
Worried that this is premature:
“there is no reason for the great powers to ever deploy or develop planet-killing kinetic bombardment capabilities”
This seems true to a first approximation, but if the risk we are preventing is tiny, then a tiny chance of dual-use becomes a big deal. The behavior of states suggests that we can’t put less than a 1 in 10,000 chance on something like this. Some random examples:
During WW2, there were powerful elements within the Japanese government that advocated total annihilation rather than surrender (Wikipedia).
Deterrence can benefit from credible signals of suicidal craziness (e.g. the ‘Samson Option’, named after the biblical character who destroyed a temple, killing himself and taking everybody with him).
The Soviet bioweapons program invested heavily in contagious weapons (e.g. smallpox) and in modifying them to overcome medical countermeasures. This work seemed to be driven by weird bureaucratic incentives that were pretty divorced from the rational strategic objectives of the Soviet Union.
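The underlying arithmetic is worth spelling out: when the baseline risk a capability is meant to prevent is tiny, even a small probability of catastrophic dual-use can dominate the expected-value calculation. The numbers below are placeholder assumptions chosen only to illustrate the structure of the argument.

```python
# Toy expected-value comparison (all probabilities are placeholder
# assumptions, not estimates). If the baseline risk a defensive
# capability prevents is tiny, even a roughly 1-in-10,000 chance of
# misuse can dominate.

baseline_risk_prevented = 1e-6    # assumed chance of the event being defended against
p_misuse = 1e-4                   # assumed chance the capability is ever misused
p_catastrophe_given_misuse = 0.1  # assumed chance misuse is catastrophic

expected_benefit = baseline_risk_prevented
expected_harm = p_misuse * p_catastrophe_given_misuse  # 1e-5

print(expected_harm > expected_benefit)  # True: the dual-use risk dominates
```

Under these made-up numbers the expected harm is ten times the expected benefit, which is why a “tiny chance of dual-use becomes a big deal” when the prevented risk is itself tiny.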
See Daniel Greene’s comment about creating better norms around publishing dangerous information (he beat me to it!).
I won’t comment on their endorsements or strategy, but I will say that even if Carrick is a longshot it doesn’t necessarily follow that it’s a bad use of marginal dollars.
Thanks for flagging, I missed this and agree this should be in blog category per the policy. Will chat with mods to figure out how to fix.
Update: after discussing and looking at some background documentation with Oli, we think the claim about ‘potentially thousands of lives’ is sufficiently supported.
Dropping a quick comment to say I’ve upvoted this and might respond with more later. I do concede the claim about thousands of lives was not thoroughly scrutinized and I’m getting more info on that now (and will remove it if it doesn’t check out). I otherwise stand by what I’ve written and also think Oli has worthwhile points.
Thanks! And yes, this seems right to me.
Huge +1 to this. If anybody is reading this and wants to get funded to start down this career track, please apply to Open Phil’s biosecurity scholarship: https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/open-philanthropy-biosecurity-scholarships
The program supports independent projects for people to learn about a field as well as degree programs.
Thanks Evan, and welcome to the forum! I agree this is an important question for prioritization, and does imply that AI is substantially more important than bio (a statement I believe despite working on biosecurity, at least if we are only considering longtermism). As Linch mentioned, we have policies/norms against publicly brainstorming information hazards. If somebody is concerned about a biology risk that might constitute an information hazard, they can contact me privately to discuss options for responsible disclosure.
One possible advantage to using the platform would be that donations to charities are tax deductible, whereas donations to campaigns are not. If set up well, this mechanism could enable somebody to ‘donate’ to a campaign with tax deductibility.
Strong upvote. I think more people should be considering this as a skill/career to develop. For arms control and verification, I feel like these tools are potentially being overlooked (and could be useful across multiple GCR/xrisk-relevant areas).
I’ve heard good things about Jeffrey Lewis and his thinking on OSINT tools on the nuclear side of things: https://www.middlebury.edu/institute/people/jeffrey-lewis
I’d be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast, and James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space. I’ve been working with the MBDF team for a while now and am impressed by what they’re getting done.
People might be surprised to hear that I put a ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially by the scientific consensus that has since formed against building it, but there is some remaining risk that the boundaries are not drawn far enough from the brink, leaving it within reach of bad actors. Having a great person in this role would help ensure a wider safety margin.