.01% Fund—Ideation and Proposal

Pronounced: Basis Fund. Alt: Point-O-One-Percent Fund

Summary

The basic idea is for a new funding agency, or a subproject within existing funding agencies, to solicit proposals for research or startup ventures that, assuming everything goes well, can reduce existential risk by >0.01 percentage points (pp), at a price point of between $100M and $1B (including both financial costs and reasonable models of human capital) per 0.01% of absolute x-risk reduced.
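For concreteness, here is a minimal sketch (illustrative only, not part of the proposal) of what this price point implies in another unit sometimes used in these discussions, microdooms (millionths of total existential risk):

```python
# Rough unit conversions implied by the funding bar (illustrative only).
BASIS_POINT = 1e-4   # 0.01 percentage points, as an absolute probability
MICRODOOM = 1e-6     # one millionth of absolute existential risk

for cost_per_basis_point in (100e6, 1e9):                  # $100M to $1B per 0.01pp
    microdooms_per_basis_point = BASIS_POINT / MICRODOOM   # = 100
    cost_per_microdoom = cost_per_basis_point / microdooms_per_basis_point
    print(f"${cost_per_basis_point:,.0f} per 0.01pp  ~  ${cost_per_microdoom:,.0f} per microdoom")
# -> roughly $1M-$10M per microdoom at the stated price point.
```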

EDIT 2022/09/21: The $100M-1B estimates are relatively off-the-cuff and not at all robust; I think there are good arguments to go higher or lower. I think the numbers aren't crazy, partially because others independently came to similar numbers (though some people I respect have different numbers). I don't think it's crazy to make decisions/defer roughly based on these numbers given limited time and attention. However, I'm worried about having too much secondary literature/large decisions based on my numbers, since that would likely result in information cascades. My current tentative guess as of 2022/09/21 is that there are more reasons to go higher (i.e., averting x-risk is more expensive than these figures suggest) than lower. However, overspending on marginal interventions is more -EV than underspending, which pushes us towards conservatism.

Also, six months after publication and ~10 months after the idea's inception, I do not currently have any real plans to actually implement this new fund. But if someone reading this is excited about it and thinks they might have a relevant background, please feel free to ping me.

The two main advantages of having a specific fund or subfund that focuses on this are:

  1. Memetic call to increase ambition and quantification: Having an entire fund focused on projects that can reduce existential risk by >0.01% can encourage people to actively look hard for potentially amazing projects, rather than “settle” for mediocre or unscalable ones.

  2. Assessor/grantmaker expertise: Having a fund that focuses on specific quantitative models of this form helps to develop individual and institutional expertise in evaluating the quality of grants a) quantitatively and b) at a specific (high) tier of ambition.

More specifically, we give initial seed funding in the $30k-$10M range to projects that

  • have within-model validity,

  • where we don’t detect obvious flaws, and

  • seem healthily devoid of very large (in expectation) downside risks.

The idea is that initially successful projects will have an increasingly strong case for impact, and can then move on to larger funders like Open Phil, Future Fund, SFF, Longview, etc.

We don’t care as much about naive counterfactual impact at the initial funding stage, as (for example) projects that save other people’s time/$s for other important things are also valuable. We will preferentially fund projects that other people either a) are not doing or b) are not doing well.

The fund will primarily fund areas with more strategic clarity than AI safety or AI governance, as the “reduce x-risk by 0.01 percentage points” framing may be less valuable for those ventures. We may also fund projects dedicated to increasing strategic clarity, especially if the arguments for their research directions are qualitatively and quantitatively compelling.

What does this funding source do that existing LT sources don’t?

  1. Point people heavily at a clear target.

    1. I feel like existing ventures I see, including both public EAIF/LTFF grants and the small number of grants/research questions that come across my desk at RP to (implicitly) evaluate, rarely have clear stories and never have numbers like “assuming this goes extremely well, we’d clearly reduce x-risk by X% through ABC”.

    2. Having a clear target to aim for is often helpful

  2. Force quantification

    1. Quantification is often good, see (ironically) qualitative arguments here.

  3. Provide both a clear source of funding and specialization/diversification of grantmakers.

    1. Grantmakers here can focus on assessing quantitative claims that clear a certain ambition bar for seed funding, while leaving qualitative claims, non-seed funding, and grant claims of much lower ambition to other grantmakers.

How will we determine if something actually reduces x-risk by 0.01pp if everything goes well?

We start with high-level estimates of x-risk from different sources, such as Michael Aird’s Database of existential risk estimates or Ord’s book, and have someone (maybe Linch? maybe a reader?) who’s good at evaluating quantitative models look at the within-model validity of each promising grant. A process such as the one outlined here could also be used when attempting quantification.
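As a very rough illustration of what a within-model check could look like (all numbers below are hypothetical placeholders, not estimates I endorse), one minimal sketch:

```python
def within_model_reduction_pp(baseline_risk_pp: float,
                              fraction_of_risk_addressed: float,
                              relative_reduction_if_successful: float) -> float:
    """Absolute x-risk reduction in percentage points, conditional on the
    project going as well as the applicant's own model assumes."""
    return (baseline_risk_pp
            * fraction_of_risk_addressed
            * relative_reduction_if_successful)

# Hypothetical example (placeholder numbers): a biorisk project whose threat
# model covers ~5% of an assumed ~3pp of existential risk from its source,
# and which, if everything goes well, removes ~10% of the risk it covers.
estimate_pp = within_model_reduction_pp(
    baseline_risk_pp=3.0,
    fraction_of_risk_addressed=0.05,
    relative_reduction_if_successful=0.10,
)
print(f"{estimate_pp:.3f}pp")   # 0.015pp -> clears the 0.01pp bar within-model
```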

As the fund progresses and matures, we may have increasingly accurate and precise high-level quantitative estimates of the x-risk from each source (through a network of advisors, in-house research, etc.), as well as stronger and stronger know-how and institutional capacity to rapidly and accurately assess/evaluate theories of change. This may involve working with QURI or other quantification groups, hiring superforecaster teams, having in-house x-risk researchers, etc.

As the fund progresses and we build strong inside views of what’s needed to reduce x-risk, we may also generate increasingly many requests for proposals for specific projects.

Will this reduce people’s ambitions too much?

Having a >0.01pp x-risk reduction goal might be bad if people would otherwise work on projects with an uncertain chance of reducing x-risk by a lot. But I think there mostly isn’t enough x-risk in the world for this to be true, other than within AI.

But it might not be too hard to avoid poaching too much human capital from AI efforts, e.g. by making this less high-status than AI alignment, mandating that pay be lower than at top AI safety efforts, etc.

I do think there are biorisk-reduction projects with cost-effectiveness 1-2(?) orders of magnitude better than the 0.01pp bar, but not more than that. And I think aiming for >0.01pp does not preclude hitting 0.1pp. Note that 0.01pp is a lower bound.

Will this make people overly ambitious?

E.g., maybe the target is too lofty, so important work that needs to be done but has no clear story for reducing x-risk by a basis point will be overlooked, or people will falter trying to achieve lofty goals.

I think this is probably fine as long as we pay people well, provide social safety nets, etc. Right now EA’s problem is insufficient ambition, or at least insufficiently targeted ambition, rather than too much ambition.

In addition, we may in practice want to consider projects with existential risk reduction estimates in the microdoom (0.0001%) region, though we will of course very strongly prefer to fund projects with >0.01% existential risk reduction.

Won’t you run into issues with Goodharting/optimizer’s curse/bad modeling?

In short, it naively seems like asking people to make a quantitative case for decreasing x-risk by a lot will result in fairly bad models. Like maybe we’d fund projects that collectively save like 10 Earths or something dumb like that. I agree with this concern but think it’s overstated because:

  1. the grantmaking agency will increasingly get good at judging models,

  2. it’s not actually that bad to overfund projects at the seed stage, since later grantmakers can then apply more judgment/discretion/skepticism at the point of scaling to tens or hundreds of millions of dollars, and

  3. my general intuition is that the optimizer’s curse is just what becomes visible when you ask people to quantify their intuitive models; without quantification (esp. in longtermism), we’d absolutely get verbal equivalents of the optimizer’s curse all the time, just not formally quantified, so the claims are “not even wrong” (see the toy simulation after this list).
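As a toy illustration of the optimizer’s curse dynamic in point 3 (assuming, purely for illustration, normally distributed true impacts and estimation noise):

```python
import random

random.seed(0)

# Toy optimizer's-curse simulation. Assumptions (purely illustrative): each
# project has a "true" x-risk reduction drawn from a normal distribution, and
# applicants' quantitative models estimate it with independent normal noise.
n_projects = 1_000
true_pp = [random.gauss(0.005, 0.003) for _ in range(n_projects)]
estimated_pp = [t + random.gauss(0, 0.010) for t in true_pp]

# Fund the 20 projects with the highest *estimated* impact.
top = sorted(range(n_projects), key=lambda i: estimated_pp[i], reverse=True)[:20]

mean_estimated = sum(estimated_pp[i] for i in top) / len(top)
mean_true = sum(true_pp[i] for i in top) / len(top)
print(f"mean estimated impact of funded projects: {mean_estimated:.4f}pp")
print(f"mean true impact of funded projects:      {mean_true:.4f}pp")
# Selecting on noisy estimates systematically inflates the apparent impact of
# the winners, which is why later-stage skepticism (point 2 above) matters.
```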

Next Steps

  1. People here evaluate this proposal and help decide whether this proposal is on-balance a good idea.

  2. I (Linch) consider whether trying out a minimally viable version of this fund is worth doing, in consultation with advisors, commentators here, and other members of the EA community.

  3. I recruit the part-time people needed for a simple, minimally viable version of this fund, e.g. a project manager, ops support, and a few technical advisors.

  4. If we do think it’s worth doing, I set up processes and an application form.

  5. I launch the fund officially on the EA Forum!

Acknowledgements and caveats

Thanks to Adam Gleave and the many commentators on my EA Forum question for discussions that led to this post. Thanks to Peter Wildeford, Michael Aird, Jonas Vollmer, Owen Cotton-Barratt, Nuño Sempere, and Ozzie Gooen for feedback on earlier drafts. Thanks also to Sydney von Arx, Vaidehi Agarwalla, Thomas Kwa, and others for verbal feedback.

This post was inspired by some of my work and thinking at Rethink Priorities, but it is not a Rethink Priorities project. All opinions are my own and do not represent those of my employers.