Biosecurity at Coefficient Giving. Previously GCR Lead at Founders Pledge.
christian.r
It’s worth separating two issues:
MacArthur’s longstanding nuclear grantmaking program as a whole
MacArthur’s late 2010s focus on weapons-usable nuclear material specifically
The Foundation had long been a major funder in the field, and made some great grants, e.g. providing support to the programs that ultimately resulted in the Nunn-Lugar Act and Cooperative Threat Reduction (see Ben Soskis’s report). Over the last few years of this program, the Foundation decided to make a “big bet” on “political and technical solutions that reduce the world’s reliance on highly enriched uranium and plutonium” (see this 2016 press release), while still providing core support to many organizations. The fissile materials focus turned out to be badly timed, with Trump’s 2018 withdrawal from the JCPOA and other issues. MacArthur commissioned an external impact evaluation, which concluded that “there is not a clear line of sight to the existing theory of change’s intermediate and long-term outcomes.” That conclusion applied to the fissile materials strategy, not to the Foundation’s general nuclear security grantmaking (“Evaluation efforts were not intended as an assessment of the wider nuclear field nor grantees’ efforts, generally. Broader interpretation or application of findings is a misuse of this report.”)
Often comments like the ones Sanjay outlined above (e.g. “after throwing a lot of good money after bad, they had not seen strong enough impact for the money invested”) refer specifically to the evaluation report of the fissile materials focus.
My understanding is that the Foundation’s withdrawal from the field as a whole (not just the fissile materials bet of the late 2010s) coincided with this, but was ultimately driven by internal organizational politics and shifting priorities, not impact.
I agree with Sanjay that “some ‘creative destruction’ might be a positive,” but I think that this actually makes it a great time to help shape grantees’ priorities to refocus the field’s efforts back on GCR-level threats, major war between the great powers, etc. rather than nonproliferation.
I think the 80K profile notes (in a footnote) that their $1-10 billion guess includes many different kinds of government spending. I would guess it includes things like nonproliferation programs and fissile materials security, nuclear reactor safety, and probably the maintenance of parts of the nuclear weapons enterprise—much of it at best tangentially related to preventing nuclear war.
So I think the number is a bit misleading (not unlike adding up AI ethics spending and AI capabilities spending and concluding that AI safety is not neglected). You can look at the single biggest grant under “nuclear issues” in the Peace and Security Funding Index (admittedly an imperfect database): it’s the U.S. Overseas Private Investment Corporation (a former government funder) paying for spent nuclear fuel storage in Maryland…
A way to get a better estimate of non-philanthropic spending might be to go through relevant parts of the State Department’s International Affairs Budget, the Bureau of Arms Control, Deterrence and Stability (ADS, formerly Arms Control, Verification, and Compliance), some DoD entities (like DTRA), and a small handful of others, add those up, and put some uncertainty around your estimates. You would get a much lower number (the Arms Control, Verification, and Compliance budget was only $31.2 million in FY 2013 according to Wikipedia; I don’t have time to dive into more recent numbers right now).
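The add-it-up-with-uncertainty approach above can be sketched as a quick Monte Carlo sum. To be clear, every budget range below is an invented placeholder (only the ~$31.2M FY 2013 AVC figure comes from the comment itself); the point is just the method, not the numbers:

```python
import random

# Hypothetical budget line items (in $ millions) with uncertainty ranges.
# Only the AVC anchor (~$31.2M, FY 2013) comes from the text above;
# everything else is a made-up placeholder for illustration.
budget_ranges = {
    "State AVC/ADS": (25, 40),          # loosely anchored on the $31.2M FY 2013 figure
    "Other State programs": (10, 50),
    "DTRA (relevant parts)": (50, 200),
    "Other DoD entities": (20, 100),
}

def estimate_total(ranges, n_draws=10_000, seed=0):
    """Monte Carlo sum: draw each line item uniformly from its range,
    then report the median and a rough 90% interval over the totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.uniform(lo, hi) for lo, hi in ranges.values())
        for _ in range(n_draws)
    )
    return (totals[n_draws // 2],
            totals[int(0.05 * n_draws)],
            totals[int(0.95 * n_draws)])

median, low, high = estimate_total(budget_ranges)
print(f"~${median:.0f}M per year (90% interval ${low:.0f}M-${high:.0f}M)")
```

Even with generous placeholder ranges, a sum like this lands in the low hundreds of millions, an order of magnitude below the $1-10 billion figure.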
All of which is to say that I think Ben’s observation that “nuclear security is getting almost no funding” is true in some sense both for funders focused on extreme risks (where Founders Pledge and Longview are the only ones) and for the field in general.
Just to clarify:
MacArthur Foundation has left the field with a big funding shortfall
Carnegie Corporation is a funder that continues to support some nuclear security work
Carnegie Endowment is a think tank with a nuclear security program
Carnegie Foundation is an education nonprofit unrelated to nuclear security
FWIW, @Rosie_Bettle and I also found this surprising and intriguing when looking into far-UVC, and ended up recommending that philanthropists focus more on “wavelength-agnostic” interventions (e.g. policy advocacy for GUV generally)
Thanks for writing this! I like the post a lot. This heuristic is one of the criteria we use to evaluate bio charities at Founders Pledge (see the “Prioritize Pathogen- and Threat-Agnostic Approaches” section starting on p. 87 of my Founders Pledge bio report).
One reason that I didn’t see listed as one of your premises is just the general point about hedging against uncertainty: we’re just very uncertain about what a future pandemic might look like and where it will come from, and the threat landscape only becomes more complex with technological advances and intelligent adversaries. One person I talked to for that report said they’re especially worried about “pandemic Maginot lines”:
I also like the deterrence-by-denial argument that you make…
[broad defenses] might also act as a deterrent because malevolent actors might think: “It doesn’t even make sense to try this bioterrorist attack because the broad & passive defense system is so good that it will stop it anyways”
… though I think for it to work you have to also add a premise about the relative risk of substitution, right? I.e. if you’re pushing bad actors away from BW, what are you pushing them towards, and how does the risk of that new choice of weapon compare to the risk of BW? I think most likely substitutions (e.g. chem-for-bio substitution, as with Aum Shinrikyo) do seem like they would decrease overall risk.
Hi Ulrik, thanks for this comment! Very much agreed on the communications failures around aerosolized transmission. I wonder how much the mechanics of transmission would enter into a policy discussion around GUV (rather than a simplified “These lights can help suppress outbreaks.”)
An interesting quote relevant to bio attention hazards from an old CNAS report on Aum Shinrikyo:
“This unbroken string of failures with botulinum and anthrax eventually convinced the group that making biological weapons was more difficult than Endo [Seiichi Endo, who ran the BW program] was acknowledging. Asahara [Shoko Asahara, the founder/leader of the group] speculated that American comments on the risk of biological weapons were intended to delude would-be terrorists into pursuing this path.”
Footnote source in the report: “Interview with Fumihiro Joyu (21 April 2008).”
Thanks for this post! I’m not sure cyber is a strong example here. Given how little is known publicly about the extent and character of offensive cyber operations, I don’t feel that I’m able to assess the balance of offense and defense very well.
Longview’s nuclear weapons fund and Founders Pledge’s Global Catastrophic Risks Fund (disclaimer: I manage the GCR Fund). We recently published a long report on nuclear war and philanthropy that may be useful, too. Hope this helps!
Just saw reporting that one of the goals for the Biden-Xi meeting today is “Being able to pick up the phone and talk to one another if there’s a crisis. Being able to make sure our militaries still have contact with one another.”
I had a Forum post about this earlier this year (with my favorite title) Call Me, Maybe? Hotlines and Global Catastrophic Risks with a section on U.S.-China crisis comms, in case it’s of interest:
“For example, after the establishment of an initial presidential-level communications link in 1997, Chinese leaders did not respond to repeated U.S. contact attempts during the 2001 Hainan Island incident. In this incident, Chinese fighter jets got too close to a U.S. spy plane conducting routine operations, and the U.S. plane had to make an emergency landing on Hainan Island. The U.S. plane contained highly classified technology, and the crew destroyed as much of it as they could (allegedly in part by pouring coffee on the equipment) before being captured and interrogated. Throughout the incident, the U.S. attempted to reach Chinese leadership via the hotline, but were unsuccessful, leading U.S. Deputy Secretary of State Richard Armitage to remark that ‘it seems to be the case that when very, very difficult issues arise, it is sometimes hard to get the Chinese to answer the phone.’”
There is currently just one track 2/track 1.5 diplomatic dialogue between the U.S. and China that focuses on strategic nuclear issues. ~$250K/year is roughly my estimate of what it would cost to start one more.
China and India. Then generally excited about leveraging U.S. alliance dynamics and building global policy advocacy networks, especially for risks from technologies that seem to be becoming cheaper and more accessible, e.g. in synthetic biology
I think in general, it’s a trade-off along the lines of uncertainty and leverage—GCR interventions pull bigger levers on bigger problems, but in high-uncertainty environments with little feedback. I think evaluations in GCR should probably be framed in terms of relative impact, whereas we can more easily evaluate GHD in terms of absolute impact.
This is not what you asked about, but I generally view GCR interventions as highly relevant to current-generation and near-term health and wellbeing. When we launched the Global Catastrophic Risks Fund last year, we wrote in the prospectus:
The Fund’s grantmaking will take a balanced approach to existential and catastrophic risks. Those who take a longtermist perspective in principle put special weight on existential risks—those that threaten to extinguish or permanently curtail humanity’s potential—even where interventions appear less tractable. Not everyone shares this view, however, and people who care mostly about current generations of humanity may prioritize highly tractable interventions on global catastrophic risks that are not directly “existential”. In practice, however, the two approaches often converge, both on problems and on solutions. A common-sense approach based on simple cost-benefit analysis points us in this direction even in the near-term.
I like that the GCR framing is becoming more popular, e.g. with Open Philanthropy renaming their grant portfolio:
We recently renamed our “Longtermism” grant portfolio to “Global Catastrophic Risks”. We think the new name better reflects our view that AI risk and biorisk aren’t only “longtermist” issues; we think that both could threaten the lives of many people in the near future.
I think: read a lot, interview a lot of people who are smarter (or more informed, connected, etc.) than I am about the problem, snowball sample from there, and then write a lot.
I wonder if FP’s research director, @Matt_Lerner, has a better answer for me, or for FP researchers in general
Thanks for the question! In 3 years, this might include:
Overall, “right of boom” interventions make up a larger fraction of funding (perhaps 1⁄4), even as total funding grows by an order of magnitude
There are major public and private efforts to understand escalation management (conventional and nuclear), war limitation, and war termination in the three-party world.
Much more research and investment in “civil defense” and resilience interventions across the board, not just nuclear. So that might include food security, bunkers, transmission-blocking interventions, better PPE, better national stockpiles and distribution systems, resilient crisis-communication systems, etc.
There are multiple ongoing track 2 and 1.5 talks, and eventually official dialogues between the U.S., Russia, and China to better understand each other’s views on limited war and find common ground on risk reduction measures and arms control beyond formal treaty-based tools
Not that I know of, but my colleague @jackva may have more to say here
A few that come to mind:
Risk-general/threat-agnostic/all-hazards risk-mitigation (see e.g. Global Shield and the GCRMA)
“Civil defense” interventions and resilience broadly defined
Intrawar escalation management
Protracted great power war
Definitely difficult. I think my colleagues’ work at Founders Pledge (e.g. How to Evaluate Relative Impact in High-Uncertainty Contexts) and iterating on “impact multipliers” to make ever-more-rigorous comparative judgments is the most promising path forward. I’m not sure that this is a problem unique to GCRs or climate. A more high-leverage risk-tolerant approach to global health and development faces the same issues, right?
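The “impact multipliers” style of comparison can be shown with a toy calculation. The factor names and values below are invented for illustration and are not taken from the Founders Pledge report; the idea is just that a candidate’s impact relative to a benchmark is decomposed into a product of adjustment factors, each of which can be scrutinized separately:

```python
from math import prod

# Toy "impact multipliers" comparison. Each factor expresses how the
# candidate compares to a benchmark charity on one dimension, as a ratio
# (1.0 = same as benchmark). All names and values here are invented.
candidate_vs_benchmark = {
    "neglectedness": 3.0,  # works in a ~3x more neglected area
    "leverage": 2.0,       # pulls on ~2x larger policy levers
    "evidence": 0.5,       # but with ~half the strength of evidence/feedback
}

def relative_impact(multipliers):
    """Overall relative impact = product of the individual multipliers."""
    return prod(multipliers.values())

print(relative_impact(candidate_vs_benchmark))  # 3.0 * 2.0 * 0.5 = 3.0
```

The appeal of this framing is that disagreements become localized: two evaluators who disagree about the bottom line can identify which specific multiplier they disagree about, which is easier to debate than a single holistic judgment.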
Thank you for drawing attention to this funding gap! Really appreciate it