I have heard mixed messages about funding.
From the many people I interact with, and from personal experience, it seems like funding is tight right now. However, when I talk to larger funders, they typically still say that AI safety is their biggest priority and that they want to allocate serious amounts of money toward it. I’m not sure how to resolve this tension, but I’d be very grateful to better understand funders’ perspectives.
I think the uncertainty around funding is problematic because it makes it hard to plan ahead. It’s hard to do independent research, start an org, hire, etc. If there were clarity, people could at least consider alternative options.
(These are my own professional opinions; other LTFF fund managers etc. might have other views.)
Hmm, I want to split the funding landscape into the following groups:
LTFF
OP
SFF
Other EA/longtermist funders
Earning-to-givers
Non-EA institutional funders
Everybody else
LTFF
At LTFF, our two biggest constraints are funding and strategic vision. Historically they were some combination of grantmaking capacity and good applications, but I think that’s much less true these days. Right now we have enough new donations to fund what we currently view as our best applications for some months, so our biggest priority is finding a new LTFF chair to help (among other things) address our strategic vision bottlenecks.
Going forward, I don’t really want to speak for other fund managers (especially given that the future chair should feel extremely empowered to shepherd their own vision as they see fit). But I think we’ll make a bid to fundraise a bunch more to help address the funding bottlenecks in x-safety. Still, even if we roughly double our current fundraising numbers[1], my guess is that we’re likely to prioritize funding more independent researchers etc. below our current bar[2], as well as supporting our existing grantees, over funding most new organizations.
(Note that in $ terms LTFF isn’t a particularly large fraction of the longtermist or AI x-safety funding landscape; I’m mostly talking about it because it’s the group I’m most familiar with.)
Open Phil
I’m not sure what the biggest constraints are at Open Phil. My two biggest guesses are grantmaking capacity and strategic vision. As evidence for the former, my impression is that they only have one person doing grantmaking in technical AI Safety (Ajeya Cotra). But it’s not obvious that grantmaking capacity is their true bottleneck, as a) I’m not sure they’re trying very hard to hire, and b) people at OP who presumably could do a good job at AI safety grantmaking (eg Holden) have moved on to other projects. It’s possible OP would prefer to conserve their AIS funds for other reasons, eg waiting on better strategic vision or saving up for a sudden influx of spending right before the end of history.
SFF
I know less about SFF. My impression is that their problems are a combination of a) structural difficulties preventing them from hiring great grantmakers, and b) funder uncertainty.
Other EA/Longtermist funders
My impression is that other institutional funders in longtermism either don’t really have the technical capacity or don’t have the gumption to fund projects that OP isn’t funding, especially in technical AI safety (where the tradeoffs are arguably more subtle and technical than in eg climate change or preventing nuclear proliferation). So they do a combination of saving money, taking cues from OP, and funding “obviously safe” projects.
Exceptions include new groups like Lightspeed (which I think is more likely than not to be a one-off thing) and Manifund (which has a regranting model).
Earning-to-givers
I don’t have a good sense of how much latent money there is in the hands of earning-to-givers who are at least in theory willing to give a bunch to x-safety projects if there’s a sufficiently large need for funding. My current guess is that it’s fairly substantial. I think there are roughly three reasonable routes for earning-to-givers who are interested in donating:
1. pooling the money in a (semi-)centralized source
2. choosing for themselves where to give
3. saving the money for better projects later.
If they go with (1), LTFF is probably one of the most obvious choices. But LTFF does have a number of dysfunctions, so I wouldn’t be surprised if either Manifund or some newer group ends up being the Schelling donation source instead.
Non-EA institutional funders
I think as AI Safety becomes mainstream, getting funding from government and non-EA philanthropic foundations becomes an increasingly viable option for AI Safety organizations. Note that direct-work AI Safety organizations have a comparative advantage in seeking such funds. In comparison, it’s much harder for both individuals and grantmakers like LTFF to seek institutional funding[3].
I know FAR has attempted some of this already.
Everybody else
As worries about AI risk become increasingly mainstream, we might see people at all levels of wealth become more excited to donate to promising AI safety organizations and individuals. It’s harder to predict what either non-Moskovitz billionaires or members of the general public will want to give to in the coming years, but plausibly the plurality of future funding for AI Safety will come from individuals who aren’t culturally EA or longtermist or whatever.
[1] Which will also be harder after OP’s matching expires.
[2] If the rest of the funding landscape doesn’t change, the tier which I previously called our 5M tier (as in 5M/6 months or 10M/year) can probably absorb on the order of 6-9M over 6 months, or 12-18M over 12 months. This is in large part because the lack of other funders means more projects are applying to us.
[3] Regranting is pretty odd outside of EA; I think it’d be a lot easier for e.g. FAR or ARC Evals to ask random foundations or the US government for money directly for their programs than for LTFF to ask for money to regrant according to our own best judgment. My understanding is that foundations and the US government also often have long forms and application processes which would be a burden for individuals to fill out; it makes more sense for institutions to pay that cost.
There’s some really useful information here. Getting it out in a more visible way would be valuable.
Thanks! I’ve crossposted the comment to LessWrong. I don’t think it’s polished enough to repost as a frontpage post (and I’m unlikely to spend the effort to polish it). Let me know if there are other audiences that would find this comment useful.