Long-Term Future Fund: Ask Us Anything!
The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we’ll be available to answer any questions you have about the fund – we look forward to hearing from all of you!
The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.
You can read about how we choose grants here. Our previous grant decisions and rationale are described in our payout reports. We’d welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.
Please ask any questions you like about the fund, including but not limited to:
Our grant evaluation process.
Areas we are excited about funding.
Coordination between donors.
Our future plans.
Any uncertainties or complaints you have about the fund. (You can also e-mail us at ealongtermfuture[at]gmail[dot]com for anything that should remain confidential.)
We’d also welcome more free-form discussion, such as:
What should the goals of the fund be?
What is the comparative advantage of the fund compared to other donors?
Why would you/would you not donate to the fund?
What, if any, goals should the fund have other than making high-impact grants? Examples could include: legibility to donors; holding grantees accountable; setting incentives; identifying and training grant-making talent.
How would you like the fund to communicate with donors?
We look forward to hearing your questions and ideas!
I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn’t been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have previously already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so what do they tend to do?
I think LTFF is doing something valuable by giving people the freedom to not “sell out” to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I’m worried about a situation where receiving a grant from LTFF isn’t enough to be sustainable, so that people go back to doing more “safe” things like working in academia or at an established org.
Any thoughts on this topic?
The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we’d be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).
Many of the grants we make to individuals are for career transitions, such as someone retraining from one research field to another, or for one-off projects. So I would expect most grants to not be renewals. That said, the bar for renewals does tend to be higher. This is because we pursue a hits-based giving approach, so are willing to fund projects that are likely not to work out—but of course will not want to renew the grant if it is clearly not working.
I think being a risk-tolerant funder is particularly valuable since most employers are, quite rightly, risk-averse. Firing people tends to be harmful to morale; internships or probation periods can help, but take a lot of supervisory time. This means people who might be great hires but are high-variance often don’t get hired. Funding them for a period of time to do independent work can de-risk the grantee, since they’ll have a more substantial portfolio to show.
The level of excitement about long-term independent work varies between fund managers. I tend to think it’s hard for people to do great work independently. I’m still open to funding it, but I want to see a compelling case that there’s not an organisation that would be a good home for the applicant. Some other fund managers are more concerned by perverse incentives in established organisations (especially academia), so are more willing to fund independent research.
I’d be interested to hear thoughts on how we could better support our grantees here. We do sometimes forward applications on to other funders (with the applicant’s permission), but we don’t have any systematic program to secure further funding (beyond applying for renewals). We could try something like the “demo days” popular in the VC world, but I’m not sure there’s a large enough ecosystem of potential funders for this to be worth it.
My impression is that it is not possible for everyone who wants to help with the long-term future to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there.
Do you disagree with this?
I can’t respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I’m currently excited about funding independent work.
Thanks for picking up the thread here Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I’m sure would have got an offer a few years ago.
I’m pretty happy to see the LTFF offering effectively “bridge” funding for people who don’t quite meet the hiring bar yet, but I think are likely to in the next few years. However, I’d be hesitant about heading towards a large fraction of people working independently long-term. I think there’s huge advantages from the structure and mentorship an org can provide. If orgs aren’t scaling up fast enough, then I’d prefer to focus on trying to help speed that up.
The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help in terms of providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. But I’d love to be proven wrong here.
I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than those founders had. In a way, an org is just a crystallized collaboration of previously independent researchers.
I think there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment.
However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of a “normal” (not AI Safety focused) PhD program is sort of an independent researcher.
We are working on that. I’m not optimistic about current orgs keeping up with the growth of the field, and I don’t think it is healthy for the career path to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.
Yeah, I am also pretty worried about this. I don’t think we’ve figured out a great solution to this yet.
I think we don’t really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don’t feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k).
Our current evaluation process feels pretty good for smaller projects, and for granting to established organizations that have other active evaluators looking into them whom we can talk to, but it doesn’t feel very well-suited to larger organizations that don’t have existing evaluations done on them (there is a lot of due diligence work to be done that I think requires higher staff capacity than we have).
I also think the general process of the LTFF specializing in something more like venture funding, with other funders stepping in for more established organizations, feels pretty good to me. I do think the current process has a lot of unnecessary uncertainty and risk in it, and I would like to work on that. So one thing I’ve been trying to get better at is predicting which projects could get long-term funding from other funders, and trying to help projects get to a place where they can receive long-term funding from more than just the LTFF.
Capital wise, I also think that we don’t really have the funding to support organizations over longer periods of time. I.e. supporting 3 organizations at $500k a year would take up almost all of our budget, and I think it’s not worth trading that off against the other smaller grants we’ve historically been making. But it is one of the most promising ways I would want to use additional funds we could get.
I agree with @Habryka that our current process is relatively lightweight, which is good for small grants but doesn’t provide adequate accountability for large grants. I think I’m more optimistic about the LTFF being able to grow into this role. There’s a reasonable number of people who we might be excited about working as fund managers—the main thing that’s held us back from growing the team is the cost of coordination overhead as you add more individuals. But we could potentially split the fund into two sub-teams that specialize in smaller and larger grants (with different evaluation processes), or even create a separate fund in EA Funds that focuses on more established organisations. Nothing certain yet, but it’s a problem we’re interested in addressing.
Ah yeah, I also think that if the opportunity presents itself we could grow into this role a good amount. Though on the margin I think it’s more likely we’ll invest even more into early-stage expertise and maybe do more active early-stage grantmaking.
Just to add a comment with regards to sustainable funding for independent researchers. There haven’t previously been many options available for this, however, there are a growing number of virtual research institutes through which affiliated researchers can apply to academic funding agencies. The virtual institute can then administer the grant for a researcher (usually for much lower overheads than a traditional institution), while they effectively still do independent work. The Ronin Institute administers funding from US granters, and I am a Board member at IGDORE which can receive funding from some European granters. That said, it may still be quite difficult for individuals to secure academic funding without having some traditional academic credentials (PhD, publications, etc.).
What do you mean by “There haven’t previously been many options available”? What is stopping you from just giving people money? Why do you need an institute as a middleman?
My understanding is that (1) to deal with the paperwork etc. for grants from governments or government-like bureaucratic institutions, you need to be part of an institution that’s done it before; (2) if the grantor is a nonprofit, they have regulations about how they can use their money while maintaining nonprofit status, and it’s very easy for them to forward the money to a different nonprofit institution, but may be difficult or impossible for them to forward the money to an individual. If it is possible to just get a check as an individual, I imagine that that’s the best option. Unless there are other considerations I don’t know about.
Btw Theiss is another US organization in this space.
One other benefit of a virtual research institute is that they can act as formal employers for independent researchers, which may be desirable for things like receiving healthcare coverage or welfare benefits.
Thanks for mentioning Theiss, I didn’t know of them before. Their website doesn’t look so active now, but it’s good to know about the history of the independent research scene.
Theiss was very much active as of December 2020. They’ve just been recruiting so successfully through word-of-mouth that they haven’t gotten around to updating the website.
I don’t think healthcare and taxes undermine what I said, at least not for me personally. For healthcare, individuals can buy health insurance too. For taxes, self-employed people need to pay self-employment tax, but employees and employers both have to pay payroll tax which adds up to a similar amount, and then you lose the QBI deduction (this is all USA-specific), so I think you come out behind even before you account for institutional overhead, and certainly after. Or at least that’s what I found when I ran the numbers for me personally. It may be dependent on income bracket or country so I don’t want to over-generalize...
That’s all assuming that the goal is to minimize the amount of grant money you’re asking for, while holding fixed after-tax take-home pay. If your goal is to minimize hassle, for example, and you can just apply for a bit more money to compensate, then by all means join an institution, and avoid the hassle of having to research health care plans and self-employment tax deductions and so on.
I could be wrong or misunderstanding things, to be clear. I recently tried to figure this out for my own project but might have messed up, and as I mentioned, different income brackets and regions may differ. Happy to talk more. :-)
In the April 2020 payout report, Oliver Habryka wrote:
I’m curious to hear more about this (either from Oliver or any of the other fund managers).
Regardless of whatever happens, I’ve benefited greatly from all the effort you’ve put in your public writing on the fund Oliver.
I am planning to respond to this in more depth, but it might take me a few days longer, since I want to do a good job with it. So please forgive me if I don’t get around to this before the end of the AMA.
Any update on this?
I wrote a long rant that I shared internally, but it was pretty far from publishable. Then a lot of things changed, and I tried editing it for a bit, but more things kept changing. Eventually I gave up on trying to edit the document to keep up with the new changes, and decided to wait until things settle down, so I can write something that isn’t going to be super confusing.
Sorry for the confusion here. At any given point it seemed like things would settle down more so I would have a more consistent opinion.
Overall, a lot of the changes have been great, and I am currently finding myself more excited about the LTFF than I have in a long time. But a bunch of decisions are still to be made, so I will hold off on writing a bit longer. Sorry again for the delay.
If you had $1B, and you weren’t allowed to give it to other grantmakers or fund prioritisation research, where might you allocate it?
$1B is a lot. It also gets really hard if I don’t get to distribute it to other grantmakers. Here are some really random guesses. Please don’t hold me to this, I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.
My guess is I would identify the top 20 people who seem to be doing the best work around long-term-future stuff, and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton around themselves and increase their output.
My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so.
I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications for organizations and projects from people and kind of just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.
I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class: people who seem to me to have done heroic things but haven’t been even remotely well enough rewarded (it seems obvious that I would have wanted Einstein to die with at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel Prize). My guess is one could spend another $100M this way.
It seems pretty plausible that one should consider buying a large newspaper with that money and optimizing it for actual careful analysis without the need for ads. This seems pretty hard, but I really don’t like the modern news landscape, and it doesn’t take that much money to run even a large newspaper like the Washington Post, so I think this is pretty doable. But it has the potential to take a good chunk of the $1B, so I am pretty unsure whether I would do it, even if you were to force me to make a call right now (for reference, the Washington Post was acquired for $250M).
I would of course just pay my fair share of the costs of all the existing good organizations that are currently funded by Open Phil. My guess is that would take about $100M over the next decade.
I would probably keep a substantial chunk in reserve for worlds where some kind of quick pivotal action is needed that requires a lot of funds. Like, I don’t know, a bunch of people pooling money for a last-minute acquisition of DeepMind or something, to prevent an acute AI risk threat.
If I had the money right now I would probably pay someone to run a $100K-$1M study of the effects of Vitamin D on COVID. It’s really embarrassing that we don’t have more data on that yet, even though it has such a large effect.
Maybe I would try to do something crazy like try to get permission to establish a new city in some U.S. state that I would try to make into a semi-libertarian utopia and get all the good people to move there? But like, that sure doesn’t seem like it would straightforwardly work out. Also, seems like it would cost substantially more money than $1B.
I’m really surprised by this; I think things like the Future of Life award are good, but if I got $1B I would definitely not think about spending potentially $100m on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?
It seems to me that one of the biggest problems with the world is that only a small fraction of people who do a really large amount of good get much rewarded for it. It seems likely that this prevents many people from pursuing doing much good with their lives.
My favorite way of solving this kind of issue is with impact certificates, which have a decent amount of writing on them, and you can think of the above as just buying about $100M of impact certificates for the relevant people (in practice I expect that if you get a good impact certificate market going, which is a big if, you could productively spend substantially more than $1B).
The cop-out answer of course is to say we’d grow the fund team or, if that isn’t an option, we’d all start working full-time on the LTFF and spend a lot more time thinking about it.
If there’s some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:
For any long-termist org who (a) I’d usually want to fund at a small scale; and (b) whose leadership’s judgement I’d trust, I’d give them as much money as they can plausibly make use of in the next 10 years. I expect that even organisations that are not usually considered funding constrained could probably produce 10-20% extra impact if they invested twice as much in their staff (let them rent really close to the office, pay for PAs or other assistants to save time, etc).
I also think there can be value in having an endowment: it lets the organisation make longer-term plans, can raise the organisation’s prestige, and some things (like creating a professorship) often require endowments.
However, I do think there are some cases where it can be negative: some organisations benefit a lot from the accountability of donors, and being too well-funded can attract the wrong people, like with the resource curse. So I’d be selective here, but more in terms of “do I trust the board and leadership with a blank cheque” than “at a detailed level, do I think this org is doing the most valuable work?”
I’d also be tempted to throw a lot of money at interventions that seem shovel-ready and robustly positive, even if they wouldn’t normally be something I’d be excited about. For example, I’d feel reasonably good about funding the CES for $10-20m, and probably similar sized grants to Nuclear Threat Initiative, etc.
This is more speculative, but I’d be tempted to try and become the go-to angel investor or VC fund for AI startups. I think I’m in a reasonably good position for this now, being an AI researcher and also having a finance background, and having a billion dollars would help out here.
The goal wouldn’t be to make money (which is good, since most VCs don’t seem to do that well!). But being an early investor gives a lot of leverage over a company’s direction. Industry is a huge player in fundamental AI research, and in particular I would predict at 85% that the first transformative AI is developed by an industry lab, not academia. Having a board seat and early insight into a start-up that is about to develop the first transformative AI seems hugely valuable. Of course, there’s no guarantee I’d manage this—perhaps I miss that startup, or a national lab or pre-existing industrial lab (Google/Facebook/Huawei/etc.) develops the technologies first. But start-ups are responsible for a big fraction of disruptive technology, so it’s a reasonable bet.
What’s your all-things-considered view for probability that the first transformative AI (defined by your lights) will be developed by a company that, as of December 2020, either a) does not exist or b) has not gone through Series A?
(Don’t take too much time on this question, I just want to see a gut check plus a few sentences if possible).
About 40%. This is including startups that later get acquired, but the parent company would not have been the first to develop transformative AI if the acquisition had not taken place. I think this is probably my modal prediction: the big tech companies are effectively themselves huge VCs, and their infrastructure provides a comparative advantage over a startup trying to do it entirely solo.
I think I put around 40% on it being a company that does already exist, and 20% on “other” (academia, national labs, etc).
Conditioning on transformative AI being developed in the next 20 years my probability for a new company developing it is a lot lower—maybe 20%? So part of this is just me not expecting transformative AI particularly soon, and tech company half-life being plausibly quite short. Google is only 21 years old!
Thanks a lot, really appreciate your thoughts here!
What processes do you have for monitoring the outcome/impact of grants, especially grants to individuals?
As part of CEA’s due diligence process, all grantees must submit progress reports documenting how they’ve spent their money. If a grantee applies for renewal, we’ll perform a detailed evaluation of their past work. Additionally, we informally look back at past grants, focusing on grants that were controversial at the time, or seem to have been particularly good or bad.
I’d like us to be more systematic in our grant evaluation, and this is something we’re discussing. One problem is that many of the grants we make are quite small: so it just isn’t cost-effective for us to evaluate all our grants in detail. Because of this, any more detailed evaluation we perform would have to be on a subset of grants.
I view there being two main benefits of evaluation: 1) improving future grant decisions; 2) holding the fund accountable. Point 1) would suggest choosing grants we expect to be particularly informative: for example, those where fund managers disagreed internally, or those which we were particularly excited about and would like to replicate. Point 2) would suggest focusing on grants that were controversial amongst donors, or where there were potential conflicts of interest.
It’s important to note that other things help with these points, too. For 1) improving our grant making process, we are working on sharing best-practices between the different EA Funds. For 2) we are seeking to increase transparency about our internal processes, such as in this doc (which we will soon add as an FAQ entry). Since evaluation is time consuming in the short-term we are likely to only evaluate a small percentage of our grants, though we may scale this up as fund capacity grows.
Interesting question and answer!
Do the LTFF fund managers make forecasts about potential outcomes of grants?
And/or do you write down in advance what sort of proxies you’d want to see from this grant after x amount of time? (E.g., what you’d want to see to feel that this had been a big success and that similar grant applications should be viewed (even) more positively in future, or that it would be worth renewing the grant if the grantee applied again.)
One reason that that first question came to mind was that I previously read a 2016 Open Phil post that states:
(I don’t know whether, how, and how much Open Phil and GiveWell still do things like this.)
We haven’t historically done this. As someone who has tried pretty hard to incorporate forecasting into my work at LessWrong, my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, and so this isn’t really super feasible to do for lots of grants. I’ve made forecasts for LessWrong, and usually creating a set of forecasts that actually feels useful in assessing our performance takes me at least 5-10 hours.
It’s possible that other people are much better at this than I am, but this makes me kind of hesitant to use at least classical forecasting methods as part of LTFF evaluation.
Thanks for that answer.
It seems plausible to me that a useful version of forecasting grant outcomes would be too time-consuming to be worthwhile. (I don’t really have a strong stance on the matter currently.) And your experience with useful forecasting for LessWrong work being very time-consuming definitely seems like relevant data.
But this part of your answer confused me:
Naively, I’d have thought that, if that was a major obstacle, you could just have a bunch of separate operationalisations, and people can forecast on whichever ones they want to forecast on. If, later, some or all operationalisations do indeed seem to have been too flawed for it to be useful to compare reality to them, assess calibration, etc., you could just not do those things for those operationalisations/that grant.
(Note that I’m not necessarily imagining these forecasts being made public in advance or afterwards. They could be engaged in internally to the extent that makes sense—sometimes ignoring them if that seems appropriate in a given case.)
Is there a reason I’m missing for why this doesn’t work?
Or was the point about difficulty of agreeing on an operationalisation really meant just as evidence of how useful operationalisations are hard to generate, as opposed to the disagreement itself being the obstacle?
I think the most lightweight-but-still-useful forecasting operationalization I’d be excited about is something like
This gets at whether people think it’s a good idea ex post, and also (if people are well-calibrated) can quantify whether people are insufficiently or too risk/ambiguity-averse, in the classic sense of the term.
This seems helpful to assess fund managers’ calibration and improve their own thinking and decision-making. It’s less likely to be useful for communicating their views transparently to one another, or to the community, and it’s susceptible to post-hoc rationalization. I’d prefer an oracle external to the fund, like “12 months from now, will X have a ≥7/10 excitement about this grant on a 1-10 scale?”, where X is a person trusted by the fund managers who will likely know about the project anyway, such that the cost to resolve the forecast is small.
I plan to encourage the funds to experiment with something like this going forward.
I agree that your proposed operationalization is better for the stated goals, assuming similar levels of overhead.
Just to make sure I’m understanding, are you also indicating that the LTFF doesn’t write down in advance what sort of proxies you’d want to see from this grant after x amount of time? And that you think the same challenges with doing useful forecasting for your LessWrong work would also apply to that?
These two things (forecasts and proxies) definitely seem related, and both would involve challenges in operationalising things. But they also seem meaningfully different.
I’d also think that, in evaluating a grant, I might find it useful to partly think in terms of “What would I like to see from this grantee x months/years from now? What sorts of outputs or outcomes would make me update more in favour of renewing this grant—if that’s requested—and making similar grants in future?”
We’ve definitely written informally things like “this is what would convince me that this grant was a good idea”, but we don’t have a more formalized process for writing down specific objective operationalizations that we all forecast on.
I’m personally actually pretty excited about trying to make some quick forecasts for a significant fraction (say, half) of the grants that we actually make, but this is something that’s on my list to discuss at some point with the LTFF. I mostly agree with the issues that Habryka mentions, though.
To add to Habryka’s response: we do give each grant a quantitative score (on a −5 to +5 scale, where 0 is zero impact). This obviously isn’t as helpful as a detailed probabilistic forecast, but I think it does give a lot of the value. For example, one question I’d like to answer from retrospective evaluation is whether we should be more consensus-driven, or fund anything that at least one manager is excited about. We could address this by scrutinizing past grants that had a high variance in scores between managers.
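As a minimal sketch of how that retrospective could work, here’s how score variance might be used to surface high-disagreement grants for scrutiny. The grant names, scores, and the variance-based ranking are all made up for illustration; this is not the LTFF’s actual process or data.

```python
# Hypothetical sketch: rank past grants by manager disagreement,
# where each grant has a list of manager scores on the -5..+5 scale.
from statistics import pstdev

grant_scores = {                 # illustrative made-up data
    "grant_a": [3, 3, 2, 3],     # near-consensus
    "grant_b": [5, -2, 4, -1],   # high disagreement
}

# Grants with the highest score spread are the most informative
# test cases for "consensus-driven vs. single-champion" funding.
by_disagreement = sorted(
    grant_scores, key=lambda g: pstdev(grant_scores[g]), reverse=True
)
print(by_disagreement[0])  # grant_b
```

Retrospectively checking whether the high-variance grants turned out well would then bear directly on whether a single excited manager is a reliable signal.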
I think it might make sense to start doing forecasting for some of our larger grants (where we’re willing to invest more time), and when the key uncertainties are easy to operationalize.
I notice that all but one of the November 2020 grants were given to individuals as opposed to organisations. What is the reason for this?
To clarify, I’m certainly not criticising—I guess it makes quite a bit of sense, as individuals are less likely than organisations to be able to get funding from elsewhere, so funding them may be better at the margin. However, I would still be interested to hear your reasoning.
I notice that the animal welfare fund gave exclusively to organisations rather than individuals in the most recent round. Why do you think there is this difference between LTFF and AWF?
Speaking just for myself on why I tend to prefer the smaller individual grants:
Currently when I look at the funding landscape, it seems that without the LTFF there would be a pretty big hole in available funding for projects to get off the ground and for individuals to explore interesting new projects or enter new domains. Open Phil very rarely makes grants smaller than ~$300k, and many donors don’t really like giving to individuals and early-stage organizations because they often lack established charity status, which makes donations to them non-tax-deductible.
CEA has set up infrastructure to allow tax-deductible grants to individuals and to organizations without charity status, and the fund itself seems well-suited to evaluate projects by individuals, since we all have pretty wide networks and can pretty quickly gather good references on individuals who are working on projects that don’t yet have an established track record.
I think in a world without Open Phil or the Survival and Flourishing Fund, much more of our funding would go to established organizations.
Separately, I also personally view a lot of the intellectual work to be done on the Long Term Future as quite compatible with independent researchers asking for grants for just themselves, or maybe small teams around them. This feels similar to how academic funding is often distributed, and I think it makes sense for domains where a lot of people should explore a lot of different directions and where we have set up infrastructure so that researchers and distillers can make contributions without needing a whole organization around them (which I think the EA Forum enables pretty well).
In addition to both of those points, I also think evaluating organizations requires a somewhat different skillset than evaluating individuals and small team projects, and we are currently better at the latter than the former (though I think we would reskill if organizational grants seemed likely to become more important again).
Thanks for this detailed answer. I think that all makes a lot of sense.
I largely agree with Habryka’s comments above.
In terms of the contrast with the AWF in particular, I think the funding opportunities in the long-termist vs animal welfare spaces look quite different. One big difference is that interest in long-termist causes has exploded in the last decade. As a result, there’s a lot of talent interested in the area, but there’s limited organisational and mentorship capacity to absorb this talent. By contrast, the animal welfare space is more mature, so there’s less need to strike out in an independent direction. While I’m not sure on this, there might also be a cultural factor—if you’re trying to perform advocacy, it seems useful to have an organisation brand behind you (even if it’s just a one-person org). This seems much less important if you want to do research.
Tangentially, I see a lot of people debating whether EA is talent constrained, funding constrained, vetting constrained, etc. My view is that for most orgs, at least in the AI safety space, they can only grow by a relatively small (10-30%) rate per year while still providing adequate mentorship. This is talent constrained in the sense that having a larger applicant pool will help the orgs select even better people. But adding more talent won’t necessarily increase the number of hires.
While 10-30% is a relatively small growth rate, if it is sustained then I expect it to eventually outstrip growth in the longtermist talent pipeline: my median guess would be sometime in the next 3-7 years. I see the LTFF’s grants to individuals in part trying to bridge the gap while orgs scale up, giving talented people space to continue to develop, and perhaps even found an org. So I’d expect our proportion of individual grants to decline eventually. This is a personal take, though, and I think others on the fund are more excited about independent research on a more long-term basis.
This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly, and that by growing could eventually come to dominate the funding.
I think one thing that’s going on is that right now due to funding constraints individuals are encouraged to create organizations that are efficient when small, as opposed to efficient when large. I’ve made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in it at that scale. Right now it seems like we only have one large funder, which makes things tricky.
This is a good point, and I do think having multiple large funders would help with this. If the LTFF’s budget grew enough I would be very interested in funding scalable interventions, but it doesn’t seem like our comparative advantage now.
I do think possible growth rates vary a lot between fields. My hot take is that new research fields are particularly hard to grow quickly. The only successful ways I’ve seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researchers, etc). You can optimize this to allow senior researchers to mentor more people (e.g. lots of peer advice, and assistants to free up senior staff time), but that seems unlikely to yield more than a 2x increase in growth rate.
Most cases where orgs have scaled up successfully have drawn on a lot of existing talent. Tech startups can grow quickly but they don’t teach each new hire how to program from scratch. So I’d love to see scalable ways to get existing researchers to work on priority areas like AI safety, biosecurity, etc.
It can be surprisingly hard to change what researchers work on, though. Researchers tend to be intrinsically motivated, so right now the best way I know is to just do good technical work to show that problems exist (and are tractable to solve), combined with clear communication. Funding can help here a bit: make sure the people doing the good technical work are not funding constrained.
One other approach might be to build better marketing: DeepMind, OpenAI, etc. are great at getting their papers a lot of attention. If we could promote relevant technical work, that might help draw more researchers to these problems. Although a lot of people in academia really hate these companies’ self-promotion, so it could backfire if done badly.
The other way to scale up is to get people to skill-up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet to absorb most additional junior talent right now. This may beat the 10-30% figure I gave, but we’d still have to wait 3-5 years before the talent comes on tap unfortunately.
I agree that research organizations of the type that we see are particularly difficult to grow quickly.
My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine there being more scalable engineering-heavy or marketing-heavy paths to impact on these problems. For example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront costs (and marginal costs), but are ones where I could see investing $10-100mil/year if we wanted.
Right now it seems like our solution to most problems is “try to solve it with experienced researchers”, which seems to be a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that’s very hard to scale, as you note (I know of almost no organizations that have done this well).
I just want to flag that I think I agree, but also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of Econ, things bio related), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting from or actively harmful for alignment. It definitely feels like we should eventually be able to do better, but it might be a while.
Just want to say I agree with both Habryka’s comments and Adam’s take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don’t have the capacity to absorb talent.
Thanks for this reply, makes a lot of sense!
I agree with Habryka and Adam.
Regarding the LTFF (Long-Term Future Fund) / AWF (Animal Welfare Fund) comparison in particular, I’d add the following:
The global longtermist community is much smaller than the global animal rights community, which means that the animal welfare space has a lot more existing organizations and people trying to start organizations that can be funded.
Longtermist cause areas typically involve a lot more research, which often implies funding individual researchers, whereas animal welfare work is typically more implementation-oriented.
Also makes sense, thanks.
What do you think has been the biggest mistake by the LTF fund (at least that you can say publicly)?
(I’m not a Fund manager, but I’ve previously served as an advisor to the fund and now run EA Funds, which involves advising the individual funds.)
In addition to what Adam mentions, two further points come to mind:
1. I personally think some of the April 2019 grants weren’t good, and I thought that some (but not all) of the critiques the LTFF received from the community were correct. (I can’t get more specific here – I don’t want to make negative public statements about specific grants, as this might have negative consequences for grant recipients.) The LTFF has since implemented many improvements that I think will prevent such mistakes from occurring again.
2. I think we could have communicated better around conflicts of interest. I know of some 2019 grants that donors perceived to be subject to a conflict of interest, where there actually wasn’t one, or where it was dealt with appropriately. (I can also recall one case where I think a conflict of interest may not have been dealt with well, but our improved policies and practices will prevent a similar potential issue from occurring again.) I think we’re now dealing appropriately with COIs (not in the sense that we refrain from any grants with a potential COI, but in the sense that we have appropriate safeguards in place that prevent a COI from impairing the decision). I would like to publish an updated policy once I get to it.
Historically I think the LTFF’s biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren’t funding interventions on climate change. We’ve received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it’s important that donors have clear expectations regarding how their money will be used.
We’ve edited the fund page to make our focus areas more explicit, and EA Funds also added the Founders Pledge Climate Change Fund for donors who want to focus on that area (and Jonas emailed donors who made this complaint, encouraging them to switch their donations to the climate change fund). I hope this will help clarify things, but we’ll have to be attentive to donor feedback both via things like this AMA and our donor survey, so that we can proactively correct any misconceptions.
Another issue I think we have is that we currently lack the capacity to be more proactively engaged with our grantees. I’d like us to do this for around 10% of our grant applications, particularly those where we are a large proportion of an organisation’s budget. In these cases it’s particularly important that we hold the organisation accountable, and provide strategic advice. In around a third of these cases, we’ve chosen not to make the grant because we feel unexcited about the organisation’s current direction, even though we think it could be a good donation opportunity for a more proactive philanthropist. We’re looking to grow our capacity, so we can hopefully pursue more active philanthropy in the future.
I agree unclear messaging has been a big problem for the LTFF, and I’m glad to see the EA Funds team being responsive to feedback around this. However, the updated messaging on the fund page still looks extremely unclear and I’m surprised you think it will clear up the misunderstandings donors have.
It would probably clear up most of the confusion if donors saw the clear articulation of the LTFF’s historical and forward looking priorities that is already on the fund page (emphasis added):
The problem is that this text is buried in the 6th subsection of the 6th section of the page. So people have to read through ~1500 words, the equivalent of three single spaced typed pages, to get an accurate description of how the fund is managed. This information should be in the first paragraph (and I believe that was the case at one point).
Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding. (Aside: it’s frustrating that there’s not an easy way to see all grants categorized in a spreadsheet so that I could pull the actual numbers without going through each grant report and hand entering and classifying each grant.)
In addition to clearly communicating that the fund prioritizes AI, I would like to see the fund page (and other communications) explain why that’s the case. What are the main arguments informing the decision? Did the fund managers decide this? Did whoever selected the fund managers (almost all of whom have AI backgrounds) decide this? Under what conditions would the LTFF team expect this prioritization to change? The LTFF has done a fantastic job providing transparency into the rationale behind specific grants, and I hope going forward there will be similar transparency around higher level prioritization decisions.
The very first sentence on that page reads (emphasis mine):
I personally think that’s quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn’t mention pandemics in that sentence? Perhaps you think “especially” is not strong enough?
An important reason why we don’t make more grants to prevent pandemics is that we receive only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margin, and I personally mostly agree with him on this.
Here’s a spreadsheet with all EA Funds grants (though without categorization). I agree a proper grants database would be good to set up at some point; I have now added this to my list of things we might work on in 2021.
We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil’s report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.
I don’t think it’s appropriate to discuss pandemics in that first sentence. You’re saying the fund makes grants that “especially” address pandemics, and that doesn’t seem accurate. I looked at your spreadsheet (thank you!) and tried to do a quick classification. As best I can tell, AI has gotten over half the money the LTFF has granted, ~19x the amount granted to pandemics (5 grants for $114,000). Forecasting projects have received 2.5x as much money as pandemics, and rationality training has received >4x as much money. So historically, pandemics aren’t even that high among non-AI priorities.
If pandemics will be on equal footing with AI going forward, then that first sentence would be okay. But if that’s the plan, why is the management team skillset so heavily tilted toward AI?
I’m glad there’s interest in funding more biosecurity work going forward. I’m pretty skeptical that relying on applications is an effective way to source biosecurity proposals though, since relatively few EAs work in that area (at least compared to AI) and big biosecurity funding opportunities (like Open Phil grantees Johns Hopkins Center for Health Security and Blue Ribbon Study Panel on Biodefense) probably aren’t going to be applying for LTFF grants.
Regarding the page’s dual purpose, I’d say informing donors is much more important than informing applicants: it’s a bad look to misinform people who are investing money based on your information.
There’s been plenty of discussion (including that Open Phil report) on why AI is a priority, but there’s been very little explicit discussion of why AI should be prioritized relative to other causes like biosecurity.
Open Phil prioritizes both AI and biosecurity. For every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI. If the LTFF had a similar proportion, I’d say the fund page’s messaging would be fine. But for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI. That degree of concentration warrants an explicit explanation, and shouldn’t be obscured by the fund’s messaging.
Thanks, I appreciate the detailed response, and agree with many of the points you made. I don’t have the time to engage much more (and can’t share everything), but we’re working on improving several of these things.
Thanks Jonas, glad to hear there are some related improvements in the works. For whatever it’s worth, here’s an example of messaging that I think accurately captures what the fund has done, what it’s likely to do in the near term, and what it would ideally like to do:
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks or promote the adoption of longtermist thinking. While many grants so far have prioritized projects addressing risks posed by artificial intelligence (and the grantmakers expect to continue this at least in the short term), the Fund is open to funding, and welcomes applications from, a broader range of activities related to the long-term future.
I agree with you that that’s pretty clear. Perhaps you could just have another sentence explaining that most grants historically have been AI-related because that’s where you receive most of your applications?
On another note, I can’t help but feel that “Global Catastrophic Risk Fund” would be a better name than “Long-term Future Fund”. This is because there are other ways to improve the long-term trajectory of civilisation than by mitigating global catastrophic risks. Also, if you were to make this change, it may help distinguish the fund from the long-term investment fund that Founders Pledge may set up.
Some of the LTFF grants (forecasting, long-term institutions, etc.) are broader than GCRs, and my guess is that at least some Fund managers are pretty excited about trajectory changes, so I’d personally think the current name seems more accurate.
Ah OK. The description below does make it sound like it’s only global catastrophic risks.
Perhaps include the word ‘predominantly’ before the word “making”?
The second sentence on that page (i.e. the sentence right after this one) reads:
“Predominantly” would seem redundant with “in addition”, so I’d prefer leaving it as-is.
OK sorry this is just me not doing my homework! That all seems reasonable.
Which of these two sentences, both from the fund page, do you think describes the fund more accurately?
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. (First sentence of fund page.)
Grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term. (Located 1500 words into fund page.)
I’d say 2 is clearly more accurate, and I think the feedback you’ve received about donors being surprised at how many AI grants were made suggests I’m not alone.
Could you operationalize “more accurately” a bit more? Both sentences match my impression of the fund. The first is more informative as to what our aims are, the second is more informative as to the details of our historical (and immediate future) grant composition.
My sense is that the first will give people an accurate predictive model of the LTFF in a wider range of scenarios. For example, if next round we happen to receive an amazing application for a new biosecurity org, the majority of the round’s funding could go on that. The first sentence would predict this, the second not.
But the second will give most people better predictions in a “business as usual” case, where our applications in future rounds are similar to those of current rounds.
My hunch is that knowing what our aims are is more important for most donors. In particular, many people reading this for the first time will be choosing between the LTFF and one of the other EA Funds, which focus on completely different cause areas. The high-level motivation seems more salient than our current grant composition for this purpose.
Ideally, of course, we’d communicate both. I’ll think about whether we should add some kind of high-level summary of the percentage of grants going to different areas under the “Grantmaking and Impact” section, which occurs earlier on the page. My main worry is that this kind of thing is hard to keep up to date, and as described above it could end up misleading donors in the other direction if our application pool suddenly changes.
Adam has mentioned elsewhere here that he would prefer to make more biosecurity grants. An interesting question here is how much the messaging should be descriptive of past donations, vs. aspirational about where the fund wants to donate more in the future.
Good point! I’d say ideally the messaging should describe both forward and backward looking donations, and if they differ, why. I don’t think this needs to be particularly lengthy, a few sentences could do it.
I agree that both of these are among our biggest mistakes.
(Not sure if this is the best place to ask this. I know the Q&A is over, but on balance I think it’s better for EA discourse for me to ask this question publicly rather than privately, to see if others concur with this analysis, or if I’m trivially wrong for boring reasons and thus don’t need a response).
Open Phil’s Grantmaking Approaches and Process has the 50/40/10 rule, where (in my mediocre summarization) 50% of a grantmaker’s grants have to have the core stakeholders (Holden Karnofsky from Open Phil and Cari Tuna from Good Ventures) on board, 40% have to be grants where Holden and Cari are not clearly on board, but can imagine being on board if they knew more, and up to 10% can be more “discretionary.”
Reading between the lines, this suggests that up to 10% of funding from Open Phil will go to places Holden Karnofsky and Cari Tuna are not inside-view excited about, because they trust the grantmakers’ judgements enough.
Is there a similar (explicit or implicit) process at LTFF?
I ask because
part of the original pitch for EA Funds, as I understood it, was that it would be able to evaluate higher-uncertainty, higher-reward donation opportunities that individual donors may not be equipped to evaluate.
Yet there’s an obvious structural incentive to make “safer” and easier-to-justify-to-donors decisions.
When looking at the April, September, and November 2020 reports, none of the grants look obviously dumb, and there’s only one donation that I feel moderately confident in disagreeing with.
Now perhaps both I and the LTFF grantmakers are unusually enlightened individuals, and accurately converged independently on great donation opportunities given the information available. Or I coincidentally share the same taste and interests. But it seems more likely that the LTFF is somewhat bounding its upside by restricting itself to grants that seem good to informed donors at first glance with public information, in addition to being good to very informed grantmakers upon careful reflection with private information. This seems suboptimal if true.
A piece of evidence for this view is that the April 2019 grants seemed more intuitively suspicious to me at the time on an inside view (and judging from the high density of critical comments on that post, this opinion was shared by many others on the EA Forum).
Now part of this is certainly that both the LTFF and the EA community were trying to “find their feet,” so to speak, and there was less of a shared social reality for what the LTFF ought to do. And nowadays we’re more familiar with funding independent researchers and projects like that.
However, I do not think this is the full story.
In general, I think I’m inclined to encourage the LTFF to become moderately more risk-seeking. In particular (if I recall my thoughts at the time correctly, and note that I have far from perfect memory or self-knowledge), I think if I were to rank the “most suspicious” LTFF grants in April 2019, I would have missed quite a few grants that I now think are good (moderate confidence). This suggests to me that moderately informed donors are not in a great spot to quickly evaluate the quality of LTFF grants.
This is an important question. It seems like there’s an implicit assumption here that the highest-impact path for the fund is to make the grants which the fund managers’ inside views rate as highest impact, regardless of whether we can explain the grant. This is a reasonable position (and thank you for your confidence!). However, I think the fund being legible does have some significant advantages:
Accountability generally seems to improve organisations’ functioning. It’d be surprising if the LTFF were a complete exception to this, and legibility seems necessary for accountability.
There’s asymmetric information between us and donors, so less legibility will tend to mean fewer donations (and I think this is reasonable). So there’s a tradeoff between greater counterfactual impact from scale vs. greater impact per $ moved.
There may be community building value in having a fund that is attractive to people without deep context or trust in the fund managers.
I’m not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and “safer” fund. Then donors can choose what kind of worldview they want to buy into.
That said, personally I don’t feel like I make any significantly different votes with LTFF money vs. my own donations. The main difference is that I am much more cautious about conflicts of interest with LTFF money than with my personal money, but I don’t think I’d want to change that. However, I do think I tend to have a more conservative taste in grants than some others in the long-termist community.
One thing to flag is that we do occasionally (with applicant’s permission) make recommendations to private donors rather than providing funding directly from the LTFF. This is often for logistical reasons, if something is tricky for CEA to fund, but it’s also an option if a grant requires a lot of context to understand (which we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further decreases the number of grant decisions that are influenced by any legibility considerations.
I’m not very familiar with the funds, but wouldn’t retrospective evaluations like Linch’s be more useful than legible reasoning? I feel like grantees and institutions like EA Funds with sufficiently long horizons want to stay trusted actors in the longer run, and so are sufficiently motivated to be trusted with some more inside-view decisions.
trust from donors can still be gained by explaining a meaningful fraction of decisions
less legible bets may have higher EV
I imagine funders will always be able to meaningfully explain at least some factors that informed them, even if some factors are hard to communicate
some donors may still not trust judgement sufficiently
maybe funded projects have measurable outcomes only far in the future (though probably there are useful proxies on the way)
evaluation of funded projects takes effort (but I imagine you want to do this anyway)
(Looks like this sentence got cut off in the middle)
To be clear, I think this is not my all-things-considered position. Rather, I think this is a fairly significant possibility, and I’d favor an analogue of Open Phil’s 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be.
This seems like a fine compromise that I’m in the abstract excited about, though of course it depends a lot on implementation details.
This is really good to hear!
I do indeed think there has been a pressure towards lower risk grants, am not very happy about it, and think it reduced the expected value of the fund by a lot. I am reasonably optimistic about that changing again in the future, but it’s one of the reasons why I’ve become somewhat less engaged with the fund. In particular Alex Zhu leaving the fund was I think a really great loss on this dimension.
I think you, Adam, and Oli covered a lot of the relevant points.
I’d add that the LTFF’s decision-making is based on the average score vote from the different fund managers, which allows grants to go through in scenarios where one person is very excited and the others aren’t very excited, but also aren’t against the grant. I.e., the mechanism allows an excited minority to make a grant that wouldn’t be approved by the majority of the committee. Overall, the mechanism strikes me as near-optimal. (Perhaps we should lower the threshold for making grants a bit further.)
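A toy sketch of that kind of averaging rule, just to make the “excited minority” property concrete. The threshold value and the example score vectors are assumptions for illustration, not the fund’s actual parameters:

```python
# Hypothetical average-score decision rule (threshold is made up).
def approve(scores, threshold=1.0):
    """Approve a grant if the mean of manager scores (-5..+5) clears a threshold."""
    return sum(scores) / len(scores) >= threshold

# One very excited manager can carry a grant past a lukewarm majority:
print(approve([5, 0, 0, 0]))    # mean 1.25 -> True
# ...but the same enthusiasm doesn't override active opposition:
print(approve([5, -3, -3, 0]))  # mean -0.25 -> False
```

Lowering `threshold` is exactly the lever mentioned above for making the fund a bit less consensus-dependent.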
I do think the LTFF might be slightly too risk-averse, and splitting the LTFF into a “legible longtermist fund” and a “judgment-driven longtermist fund” to remove pressure from donors towards the legible version seems a good idea and is tentatively on the roadmap.
How much room for additional funding does the LTFF have? Do you have an estimate of how much money you could take on while still achieving the same ROI on the marginal dollar donated?
Really good question!
We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year’s bar for funding. This would be:
$1.7M more than our current balance
$500K more per year than we’ve spent in previous years
$800K more than the total amount of donations received in 2020 so far
$400K more than a naive guess for what the total amount of donations received will be in all of 2020. (That is, if we wanted a year of donations to pay for a year of funding, we would need $400K more in donations next year than what we got this year.)
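For concreteness, the arithmetic behind the figures above can be reproduced as a quick back-of-the-envelope check (all figures in $K; the 2020 donation totals are implied by the stated differences):

```python
# Back-of-the-envelope check of the figures above (all amounts in $K).
balance = 315            # current fund balance
target = 2000            # median guess for spending over the next year
spent_per_year = 1500    # roughly what the fund has granted per year so far

donations_2020_so_far = target - 800  # implied: ~$1.2M received so far in 2020
naive_2020_total = target - 400       # implied: ~$1.6M naive guess for all of 2020

print(target - balance)         # ~1685, i.e. ~$1.7M more than the current balance
print(target - spent_per_year)  # 500, i.e. $500K/year more than past spending
```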
Generally, we fund anything above a certain bar, without accounting explicitly for the amount of money we have. According to this policy, for the last two years, the fund has given out ~$1.5M per year, or ~$500K per grant round, and has not accumulated a significant buffer.
This round had an unusually large number of high-quality applicants. We spent $500K, but we pushed two large grant decisions to our next payout round, and several of our applicants happened to receive money from another source just before we communicated our funding decision. This makes me think that if this increase in high-quality applicants persists, it would be reasonable to have $600K - $700K per grant round, for a total of ~$2M over the next year.
My personal guess is that the increase in high-quality applications will persist, and I’m somewhat hopeful that we will get even more high-quality applications, via a combination of outreach and potentially some active grantmaking. This makes me think that $2M over the next year would be reasonable for not going below the ROI on the last marginal dollar of the grants we made this year, though I’m not certain. (Of the two other fund managers who have made quantitative guesses on this so far, one fund manager also had $2M as their median guess, while another thought slightly above $1.5M was more likely.)
I also think there’s a reasonable case for having slightly more than our median guess available in the fund. This would both act as a buffer in case we end up with more grants above our current bar than expected, and would let us proactively encourage potential grantees to apply for funding without being worried that we’ll run out of money.
If we got much more money than applications that meet our current bar, we would let donors know. I think we would also consider lowering our bar for funding, though this would only happen after checking in with the largest donors.
* This is less than the amount displayed in our fund page, which is still being updated with our latest payouts.
Do you have a vision for what the Long-Term Future Fund looks like in 3 to 10 years? Do you expect it to stay mostly the same, perhaps taking in more money, or to undergo large structural changes?
As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds, and I’ve also been thinking about the longer-term strategy for EA Funds as a whole.
Some thoughts on this question:
LTFF strategy: There is no official 3-10 year vision or strategy for the LTFF yet, but I hope we will get there sometime soon. My own best guess for the LTFF’s vision (which I haven’t yet discussed with the LTFF) is: ‘Thoughtful people have the resources they need to successfully implement highly impactful projects to improve the long-term future.’ My best guess for the LTFF’s mission/strategy is ‘make judgment-driven grants to individuals and small organizations and proactively seed new longtermist projects.’ A plausible goal could be to allocate $15 million per year to effective longtermist projects by 2025 (where ‘effective’ means something like ‘significantly better than Open Phil’s last dollar, similar to the current quality of grants’).
Grantmaking capacity: To get there, we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing these ideas. EA Funds can primarily improve the first factor, and I think this is the main limiting factor right now (though this could change within a few months). I am currently implementing the first iteration of a fund manager appointment process, where we invite potential grantmakers to apply as Fund managers, and we are also considering hiring a full-time grantmaking specialist. Hopefully, this will allow the LTFF to increase the number of grants it can evaluate, and its active grantmaking capacity in particular.
Types of grants: Areas in which I expect the LTFF to be able to substantially expand its current grantmaking include academic teaching buy-outs, scholarships and top-up funding for poorly paid academics, research assistants for academics, and proactively seeding new longtermist organizations and research projects (active grantmaking).
Structural changes: I think having multiple fund managers on a committee rather than a single decision-maker leads to improved diversity of networks and opinions, and increased robustness in decision-making. Increasing the number of committee members on a single committee leads to disproportionately larger coordination overhead, so the way to scale this might be to create multiple committees. I also think a committee model would benefit from having one or more full-time staff who can dedicate their full attention to EA Funds or the LTFF and collaborate with a committee of part-time/volunteer grantmakers, so I may want to look into hiring for such positions.
Legible longtermist fund: Donating to the LTFF currently requires a lot of trust in the Fund managers because many of the grants are speculative and hard to understand for people less involved in EA. While I think the current LTFF grants are plausibly the most effective use of longtermist funding, there is significant donor demand for a more legible longtermist donation option (i.e., one that isn’t subject to massive information asymmetry and thus doesn’t rely on trust as much). This may speak in favor of setting up a second, more ‘mainstream’ long-term future fund. That fund might give to most established longtermist institutes, and its grants would be largely fungible with Open Phil’s funding, but it seems likely to be a better way to introduce interested donors to longtermism.
Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else. Regarding the LTFF and longtermism in particular, Open Phil has expanded its activities, Survival And Flourishing (SAF) has launched, and other donors and grantmakers (such as Longview Philanthropy) continue to be active in the area to some degree, which means that effective projects may get funded even if the LTFF doesn’t expand its grantmaking. It’s pretty plausible to me that EA Funds should pursue a strategy that’s less focused on grantmaking than what I wrote in the above paragraphs, which would mean that I might not dedicate as much attention to expanding the LTFF in the ways suggested above. I’m still thinking about this; the decision will likely depend on external feedback and experiments (e.g., how quickly we can make successful active grants).
If anyone has any feedback, thoughts, or questions about the above, I’d be interested in hearing from you (here or via PM).
I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things:
Having a default place for highly engaged EAs to donate, that is willing to take on large risks, fund things that seem weird, and rely heavily on social connections, the community and grantmaker intuitions
Have a default place for more risk-averse donors who feel value-aligned with EA to donate, who don’t necessarily have high trust in the community
Having something doing (1) seems really valuable, and I would feel sad if the LTFF reined back the kinds of things it funded to have a better public image. But I also notice that, e.g. when giving donation advice to friends who broadly agree with EA ideas but aren’t really part of the community, I don’t feel comfortable recommending EA Funds. And I think a bunch of the grants would seem weird to anyone with moderately skeptical priors. (This is partially an opinion formed from the April 2019 grants; I feel it less strongly about more recent grants.)
And it would be great to have a good, default place to recommend my longtermist friends donate to, analogous to being able to point people to GiveWell top charities.
The obvious solution to this is to have two separate institutions, trying to do these two different things? But I’m not sure how workable that is here (and I’m not sure what a ‘longtermist fund that tries to be legible and public-facing, but without OpenPhil’s scale of money’ would actually look like!)
This sounds right to me.
Do you mean this as distinct from Jonas’s suggestion of:
It seems to me that that could address this issue well. But maybe you think the other institution should have a more different structure or be totally separate from EA Funds?
FWIW, my initial reaction is “Seems like it should be very workable? Just mostly donate to organisations that have relatively easy to understand theories of change, have already developed a track record, and/or have mainstream signals of credibility or prestige (e.g. affiliations with impressive universities). E.g., Center for Health Security, FHI, GPI, maybe CSET, maybe 80,000 Hours, maybe specific programs from prominent non-EA think tanks.”
Do you think this is harder than I’m imagining? Or maybe that the ideal would be to give to different types of things?
Nah, I think Jonas’ suggestion would be a good implementation of what I’m suggesting. Though as part of this, I’d want the LTFF to be less public facing and obvious—if someone googled ‘effective altruism longtermism donate’ I’d want them to be pointed to this new fund.
Hmm, I agree that a version of this fund could be implemented pretty easily—eg just make a list of the top 10 longtermist orgs and give 10% to each. My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against OpenPhil. And I’m not sure how to do it well and ethically.
Yeah, we could simply explain transparently that it would funge with Open Phil’s longtermist budget.
Are there any areas covered by the fund’s scope where you’d like to receive more applications?
I’d overall like to see more work that has a solid longtermist justification but isn’t as close to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.
There are also lots of particular less-established directions where I’d personally be interested in seeing more work, e.g.:
Work on structured transparency tools for detecting risks from rogue actors
Work on information security’s effect on AI development
Work on the offense-defense balance in a world with many advanced AI systems
Work on the likelihood and moral value of extraterrestrial life
Work on increasing institutional competence, particularly around existential risk mitigation
Work on effectively spreading longtermist values outside of traditional movement-building
These are largely a reflection of what I happen to have been thinking about recently and definitely not my fully-endorsed answer to this question—I’d like to spend time talking to others and coming to more stable conclusions about specific work the LTFF should encourage more of.
This is very much a personal take; I’m not sure whether others on the fund would agree.
Buying extra time for people already doing great work. A lot of high-impact careers pay pretty badly: many academic roles (especially outside the US), some non-profit and think-tank work, etc. There are certainly diminishing returns to money, and I don’t want the long-termist community to engage in zero-sum consumption of Veblen goods. But there are also plenty of things that are solid investments in your productivity, like having a comfortable home office, a modern computer, ordering takeaway or hiring cleaners, and having enough runway to avoid financial insecurity.
Financial needs also vary a fair bit from person to person. I know some people who are productive and happy living off Soylent and working on a laptop on their bed, whereas I’d quickly burn out doing that. Others might have higher needs than me, e.g. if they have financial dependents.
As a general rule, if I’d be happy to fund someone for $Y/year if they were doing this work by themselves, and they’re getting paid $X/year by their employer to do this work, I think I should be happy to pay the difference $(Y-X)/year provided the applicant has a good plan for what to do with the money. If you think you might benefit from more money, I’d encourage you to apply. Even if you don’t think you’ll get it: a lot of people underestimate how much their time is worth.
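The rule of thumb above can be written as a one-line formula. The specific salary figures below are made up for illustration:

```python
def top_up(worth_per_year, current_salary):
    """Hypothetical sketch of the rule above: if the work would be worth
    funding at $Y/year on its own, and the employer pays $X/year for it,
    the fund tops up the difference $(Y - X)/year (never a negative amount)."""
    return max(0, worth_per_year - current_salary)

print(top_up(70_000, 45_000))  # 25000: fund the $25K/year gap
print(top_up(50_000, 60_000))  # 0: already paid at or above the bar
```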
Biosecurity. At the margin, I’m about equally excited about biosecurity as about mitigating AI risks, largely because biosecurity currently seems much more neglected from a long-termist perspective. Yet the fund makes many more grants in the AI risk space.
We have received a reasonable number of biosecurity applications in recent rounds (though we still receive substantially more for AI), but our acceptance rate has been relatively low. I’d be particularly excited about seeing applications with a relatively clear path to impact. Many of our applications have been for generally trying to raise awareness, and I think getting the details right is really crucial here: targeting the right community, having enough context and experience to understand what that community would benefit from hearing, etc.
What is the LTFF’s position on whether we’re currently at an extremely influential time for direct work? I saw that there was a recent grant on research into patient philanthropy, but most of the grants seem to be made from the perspective of someone who thinks that we are at “the hinge of history”. Is that true?
At least for me the answer is yes, I think the arguments for the hinge of history are pretty compelling, and I have not seen any compelling counterarguments. I think the comments on Will’s post (which is the only post I know arguing against the hinge of history hypothesis) are basically correct and remove almost all basis I can see for Will’s arguments. See also Buck’s post on the same topic.
I think this century is likely to be extremely influential, but there’s likely important direct work to do at many parts of this century. Both patient philanthropy projects we funded have relevance to that timescale—I’d like to know about how best to allocate longtermist resources between direct work, investment, and movement-building over the coming years, and I’m interested in how philanthropic institutions might change.
I also think it’s worth spending some resources thinking about scenarios where this century isn’t extremely influential.
Whether we are at the “hinge of history” is a question of degree; different moments in history are influential to different degrees. I personally think the current moment is likely very influential, such that I want to spend a significant fraction of the resources we have now, and I think on the current margin we should probably be spending more. I think this could change over the coming years, though.
What are you not excited to fund?
Of course there’s lots of things we would not want to (or cannot) fund, so I’ll focus on things which I would not want to fund, but which someone reading this might have been interested in supporting or applying for.
Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them
This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain themselves is by dealing in prestige: universities selling naming rights being a canonical example. It’s also pretty easy to justify to oneself: of course you have to make this one sacrifice of your principles, so you can do more good later, etc.
I’m torn on this because gaining leverage can be a good strategy, and indeed it seems hard to see how we’ll solve some major problems without individuals or organisations pursuing this. So I wouldn’t necessarily discourage people from pursuing this path, though you might want to think hard about whether you’ll be able to avoid value drift. But there’s a big information asymmetry as a donor: if someone is seeking support for something that isn’t directly useful now, with the promise of doing something useful later, it’s hard to know if they’ll follow through on that.
Movement building that increases quantity but reduces quality or diversity. The initial composition of a community has a big effect on its long-term composition: people tend to recruit people like themselves. The long-termist community is still relatively small, so we can have a substantial effect on the current (and therefore long-term) composition now.
So when I look for whether to fund a movement building intervention, I don’t just ask if it’ll attract enough good people to be worth the cost, but also whether the intervention is sufficiently targeted. This is a bit counterintuitive, and certainly in the past (e.g. when I was running student groups) I tended to assume that bigger was always better.
That said, the details really matter here. For example, AI risk is already in the public conscience, but most people have only been exposed to terrible low-quality articles about it. So I like Robert Miles’s YouTube channel, since it’s a vastly better explanation of AI risk than most people will have come across. I still think most of the value will come from the small percentage of people who seriously engage with it, but I expect it to be positive or at least neutral for the vast majority of viewers.
I agree that both of these are among the top 5 things that I’ve encountered that make me unexcited about a grant.
Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying—there’s a lot of healthy disagreement within the fund, and we fund lots of things where at least one person thinks it’s below our bar. I also think a well-justified application could definitely change my mind.
Improving science or technology, unless there’s a strong case that the improvement would differentially benefit existential risk mitigation (or some other aspect of our long-term trajectory). As Ben Todd explains here, I think this is unlikely to be as highly-leveraged for improving the long-term future as trajectory changing efforts. I don’t think there’s a strong case that generally speeding up economic growth is an effective existential risk intervention.
Climate change mitigation. From the evidence I’ve seen, I think climate change is unlikely to be either directly existentially threatening or a particularly highly-leveraged existential risk factor. (It’s also not very neglected.) But I could be excited about funding research work that changed my mind about this.
Most self-improvement / community-member-improvement type work, e.g. “I want to create materials to help longtermists think better about their personal problems.” I’m not universally unexcited about funding this, and there are people who I think do good work like this, but my overall prior is that proposals here won’t be very good.
I am also unexcited about the things Adam wrote.
(I drafted this comment earlier and feel like it’s largely redundant by now, but I thought I might as well post it.)
I agree with what Adam and Asya said. I think many of those points can be summarized as ‘there isn’t a compelling theory of change for this project to result in improvements in the long-term future.’
Many applicants have great credentials, impressive connections, and a track record of getting things done, but their ideas and plans seem optimized for some goal other than improving the long-term future, and it would be a suspicious convergence if they were excellent for the long-term future as well. (If grantseekers don’t try to make the case for this in their application, I try to find out myself if this is the case, and the answer is usually ‘no.’)
We’ve received applications from policy projects, experienced professionals, and professors (including one with tens of thousands of citations), but ended up declining largely for this reason. It’s worth noting that these applications aren’t bad – often, they’re excellent – but they’re only tangentially related to what the LTFF is trying to achieve.
What are you excited to fund?
A related question: are there categories of things you’d be excited to fund, but haven’t received any applications for so far?
I think the long-termist and EA communities seem too narrow on several important dimensions:
Methodologically there are several relevant approaches that seem poorly represented in the community. A concrete example would be having more people with a history background, which seems critical for understanding long-term trends. In general I think we could do better interfacing with the social sciences and other intellectual movements.
I do think there are challenges here. Most fields are not designed to answer long-term questions. For example, history is often taught by focusing on particular periods, whereas we are more interested in trends that persist across many periods. So the first people joining from a particular field are going to need to figure out how to adapt their methodology to the unique demands of long-termism.
There’s also risks from spreading ourselves too thin. It’s important we maintain a coherent community that’s able to communicate with each other. Having too many different methodologies and epistemic norms could make this hard. Eventually I think we’re going to need to specialize: I expect different fields will benefit from different norms and heuristics. But right now I don’t think we know what the right way to split long-termism is, so I’d be hesitant to specialize too early.
I also think we are currently too centered in Europe and North America, and see a lot of value in having a more active community in other countries. Many long-term problems require some form of global coordination, which will benefit significantly from having people in a variety of countries.
I do think we need to take care here. First impressions count a lot, so poorly targeted initial outreach could hinder long-term growth in a country. Even seemingly simple things like book translations can be quite difficult to get right. For example, the distinction in English between “safety” and “security” is absent in many languages, which can make translating AI safety texts quite challenging!
More fundamentally, EA ideas arose out of quite a specific intellectual tradition around questions of how to lead a good life, what meaning looks like, and so on, so figuring out how our ideas do or don’t resonate with people in places with very different intellectual traditions is a serious challenge.
Of course, our current demographic breakdown is not ideal for a community that wants to exist for many decades to come, and I think we’re missing out on some talented people because of this. It doesn’t help that many of the fields and backgrounds we are drawing from tend to be unrepresentative, especially in terms of gender balance. So improving this seems like it would dovetail well with drawing people from a broader range of academic backgrounds.
I also suspect that the set of motivations we’re currently tapping into is quite narrow. The current community is mostly utilitarian. But the long-termist case stands up well under a wide range of moral theories, so I’d like to see us reaching people with a wider range of moral views.
Related to this, I think we currently appeal only to a narrow range of personality types. This is inevitable to a degree: I’d expect individuals higher in conscientiousness or neuroticism to be more likely to want to work to protect the long-term future, for example. But I also think we have so far disproportionately attracted introverts, which seems more like an accident of the communities we’ve drawn upon and how we message things. Notably, extraversion vs. introversion does not seem to correlate with pro-environmental behaviours, for example, whereas agreeableness and openness do (Walden, 2015; Hirsh, 2010).
I would be excited about projects that work towards these goals.
(As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds.)
I agree with Adam and Asya. Some quick further ideas off the top of my head:
More academic teaching buy-outs. I think there are likely many longtermist academics who could get a teaching buy-out but aren’t even considering it.
Research into the long-term risks (and potential benefits) of genetic engineering.
Research aimed at improving cause prioritization methodology. (This might be a better fit for the EA Infrastructure Fund, but it’s also relevant to the LTFF.)
Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.
Research assistants for academic researchers (and for independent researchers if they have a track record and there’s no good organization for them).
Books about longtermism-relevant topics.
How important is this in the context of, e.g., Sci-Hub existing?
Not everyone uses Sci-Hub, and even for those who do, open access still removes trivial inconveniences. But yeah, Sci-Hub, and the fact that PDFs (often preprints) are usually easy to find even when a paper isn’t open access, make me a bit less excited about this.
That’s really interesting to read, thanks very much! (Both for this answer and for the whole AMA exercise)
I’ve already covered in this answer areas where we don’t make many grants but I would be excited about us making more grants. So in this answer I’ll focus on areas where we already commonly make grants, but would still like to scale this up further.
I’m generally excited to fund researchers when they have a good track record, are focusing on important problems and when the research problem is likely to slip through the cracks of other funders or research groups. For example, distillation style research, or work that is speculative or doesn’t neatly fit into an existing discipline.
Another category, which is a bit harder to define, is grants we have a comparative advantage at evaluating. This could be because one of the fund managers happens to already be an expert in the area and has a lot of context. Or maybe the application is time-sensitive and we’re just about to start evaluating a grant round. In these cases the counterfactual impact is higher: these grants are less likely to be made by other donors.
The LTFF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long-term future?
The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn’t represent what we think is the ideal split of total EA funding between cause-areas.
In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our applications are for speculative or early-stage projects. Given this, if you’re reading this and are interested in applying to the LTFF but haven’t seen us fund projects in your area before—don’t let that put you off. We’re open to funding things in a very broad range of areas provided there’s a compelling long-termist case.
Because cause prioritization isn’t actually that decision relevant for most of our applications, I haven’t thought especially deeply about it. In general, I’d say the fund is comparably excited about marginal work in reducing long-term risks from AI, biosafety, and general longtermist macrostrategy and capacity building. I don’t currently see promising interventions in climate change, which already attracts significant funding from other sources, although we’d be open to funding something that seemed neglected, especially if it focused on mitigating or predicting extreme risks.
One area where there’s active debate is the degree to which we should support general governance improvements. For example, we made a $50,000 grant to the Center for Election Science (CES) in our September 2020 round. CES has significantly more room for funding, so the main thing holding us back was uncertainty regarding the long-termist case for impact compared to more targeted interventions.
What are the most common reasons for rejection for applications of the Long-Term Future Fund?
Filtering out obvious misfits, the most common reason is that I don’t think the project proposal would be sufficiently valuable for the long-term future, even if executed well. Less commonly, the reason is that there isn’t strong enough evidence that the project will be executed well.
Sorry if this is an unsatisfying answer—I think our applications are different enough that it’s hard to think of common reasons for rejection that are more granular. Also, often the bottom line is “this seems like it could be good, but isn’t as good as other things we want to fund”. Here are some more concrete kinds of reasons that I think have come up at least more than once:
Project seems good for the medium-term future, but not for the long-term future
Applicant wants to learn the answer to X, but X doesn’t seem like an important question to me
Applicant wants to learn about X via doing Y, but I think Y is not a promising approach for learning about X
Applicant proposes a solution to some problem, but I think the real bottleneck in the problem lies elsewhere
Applicant wants to write something for a particular audience, but I don’t think that writing will be received well by that audience
Project would be good if executed exceptionally well, but applicant doesn’t have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability
Applicant wants to do research on some topic, but their previous research on similar topics doesn’t seem very good
Applicant wants money to do movement-building, but several people have reported negative interactions with them
Hey Asya! I’ve seen that you received a comment prize for this. Congratulations! I found it interesting. I was wondering: you give these two reasons for rejecting a funding application:
Project would be good if executed exceptionally well, but applicant doesn’t have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability.
Applicant wants to do research on some topic, but their previous research on similar topics doesn’t seem very good.
My question is: what method would you use to evaluate the track record of someone who has not done a Ph.D. in AI Safety, but rather in something like Physics (my case :) )? Do you expect the applicant to have some track record in AI Safety research? I do not plan on applying for funding in the short term, but I think I would find some intuition on this valuable. I also ask because I find it hard to calibrate myself on the quality of my own research.
Hey! I definitely don’t expect people starting AI safety research to have a track record doing AI safety work—in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don’t know the details of your situation, but in general I don’t think “former physics student starting AI safety work” fits into the category of “project would be good if executed exceptionally well”. In that case, I think most of the value would come from supporting the transition of someone who could potentially be really good, rather than from the object-level work itself.
In the case of other technical Ph.D.s, I generally check whether their work is impressive in the context of their field, whether their academic credentials are impressive, and what their references have to say. I also place a lot of weight on whether their proposal makes sense and shows an understanding of the topic, and on my own impressions of the person after talking to them.
I do want to emphasize that “paying a smart person to test their fit for AI safety” is a really good use of money from my perspective—if the person turns out to be good, I’ve in some sense paid for a whole lifetime of high-quality AI safety research. So I think my bar is not as high as it is when evaluating grant proposals for object-level work from people I already know.
Most common is definitely that something doesn’t really seem very relevant to the long-term future (concrete example: “Please fund this local charity that helps people recycle more”). This is probably driven by people applying with the same project to lots of different grant opportunities, at least that’s how the applications often read.
I would have to think a bit more about patterns that apply to the applications that pass the initial filter (i.e. are promising enough to be worth a deeper investigation).
Do you think it’s possible that, by only funding individuals/organisations that actually apply for funding, you are missing out on even better funding opportunities for individuals or organisations that didn’t apply for some reason?
If yes, one possible remedy might be putting more effort into advertising the fund so that you get more applications. Alternatively, you could just decide that you won’t be limited by the applications you receive and that you can give money to individuals/organisations who don’t actually apply for funding (but could still use it well). What do you think about these options?
Yes, I think we’re definitely limited by our application pool, and it’s something I’d like to change.
I’m pretty excited about the possibility of getting more applications. We’ve started advertising the fund more, and in the latest round we got the highest number of applications we rated as good (score >= 2.0, where 2.5 is the funding threshold). This is about 20-50% more than the long-term trend, though it’s a bit hard to interpret (our scores are not directly comparable across time). Unfortunately, the percentage of good applications also dropped this round, so we need to keep our outreach reasonably targeted to keep the review burden manageable.
I’m most excited about more active grant-making. For example, we could post proposals we’d like to see people work on, or reach out to people in particular areas to encourage them to apply for funding. Currently we’re bottlenecked on fund manager time, but we’re working on scaling that.
I’d be hesitant about funding individuals or organisations that haven’t applied—our application process is lightweight, so if someone chooses not to apply even after we prompt them, that seems like a bad sign. A possible exception would be larger organisations that already make the information we need available for assessment. Right now I’m not excited about funding more large organisations, since I think the marginal impact there is lower, but if the LTFF had a lot more money to distribute then I’d want to scale up our organisation grants.
Thanks for this reply. Active grant-making sounds like an interesting idea!
Good question! Relatedly, are there common characteristics among people/organizations who you think would make promising applicants but often don’t apply? Put another way, who would you encourage to apply who likely hasn’t considered applying?
A common case is people who are just shy to apply for funding. I think a lot of people feel awkward about asking for money. This makes sense in some contexts—asking your friends for cash could have negative consequences! And I think EAs often put additional pressure on themselves: “Am I really the best use of this $X?” But of course as a funder we love to see more applications: it’s our job to give out money, and the more applications we have, the better grants we can make.
Another case is people (wrongly) assuming they’re not good enough. I think a lot of people underestimate their abilities, especially in this community. So I’d encourage people to just apply, even if you don’t think you’ll get it.
Do you feel that someone who applied unsuccessfully and then re-applied for a similar project (perhaps having gathered more evidence) would be more likely, less likely, or equally likely to get funding than someone submitting an identical second application who had chosen not to apply the first time and so had never been rejected?
It feels easy to get into the mindset of “Once I’ve done XYZ, my application will be stronger, so I should do those things before applying”, and if that’s a bad line of reasoning to use (which I suspect it might be), some explicit reassurance might result in more applications.
I think definitely more or equally likely. :) Please apply!
Another one is that people assume we are inflexible in some way (e.g., constrained by maximum grant sizes or fixed application deadlines), but we can often be very flexible in working around those constraints, and have done that in the past.
Do you have any plans to become more risk tolerant?
Without getting too much into details, I disagree with some things you’ve chosen not to fund, and as an outsider I view the fund as too unwilling to take risks on projects, especially projects where you don’t know the requesters well, and to truly pursue a hits-based model. I really like some of the big bets you’ve taken in the past, for example funding people doing independent research who then produced what I consider useful or interesting results. But I’m somewhat hesitant about donating to the LTF because I’m not sure it takes enough risks to be a clearly better choice, for someone like me who’s fairly risk-tolerant with their donations, than donating to other established projects or donating directly (though direct giving has the disadvantage of making it hard for me to give something like seed funding and still get tax advantages).
From an internal perspective I’d view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we’re too risk-tolerant as that we’re too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole.
We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist’s curse. I’d estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecurity and policy.
It’s worth noting that, unless I see a clear case for a grant, I tend to predict a low expected value—not just a high-risk opportunity. This is because I think most projects aren’t going to positively influence the long-term future—otherwise the biggest risks to our civilization would already be taken care of. Based on that prior, it takes significant evidence to update me in favour of a grant having substantial positive expected value. This produces similar decisions to risk-aversion with a more optimistic prior.
Unfortunately, it’s hard to test this prior: we’d need to see how good the grants we didn’t make would have been. I’m not aware of any grants we passed on that turned out to be really good. But I haven’t evaluated this systematically, and we’d only know about those which someone else chose to fund.
An important case where donors may be better off making donations themselves rather than donating via us is when they have more information than we do about some promising donation opportunities. In particular, you likely hear disproportionately about grants we rejected from people already in your network. You may be in a much better position to evaluate these than we are, especially if the impact of the grant hinges on the individual’s abilities, or requires a lot of context to understand.
It’s unfortunate that individual donors can’t directly make grants to individuals in a tax efficient manner. You could consider donating to a donor lottery—these will allow you to donate the same amount of money (in expectation) in a tax efficient manner. While grants can only be made within CEA’s charitable objects, this should cover the majority of things donors would want to support, and in any case the LTFF also faces this restriction. (Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.)
This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do exactly this. I’d still want LTF as a fall back for funds I couldn’t figure out how to better allocate myself, but the need for tax deductibility limits my options today (though, yes, there are donor lotteries).
Interested in talking more about this – sent you a PM!
EDIT: I should mention that this is generally pretty hard to implement, so there might be a large fee on such grants, and it might take a long time until we can offer it.
Can you clarify on your models on which kinds of projects could cause net harm? My impression is that there are some thoughts that funding many things would be actively harmful, but I don’t feel like I have a great picture of the details here.
If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I’d hope that we could eventually identify opportunities for long-term impact that aren’t “find a small set of particularly highly talented researchers”, but things more like, “spend X dollars advertising Y in a way that could scale” or “build a sizeable organization of people that don’t all need to be top-tier researchers”.
Some things I think could actively cause harm:
Projects that accelerate technological development of risky technologies without corresponding greater speedup of safety technologies
Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along
Projects that engage with policymakers in an uncareful way, making them less willing to engage with longtermism in the future, or causing them to make bad decisions that are hard to reverse
Movement-building projects that give a bad first impression of longtermists
Projects that risk attracting a lot of controversy or bad press
Projects with ‘poisoning the well’ effects where if it’s executed poorly the first time, someone trying it again will have a harder time—e.g., if a large-scale project doing EA outreach to highschoolers went poorly, I think a subsequent project would have a much harder time getting buy-in from parents.
More broadly, I think, as Adam notes above, that the movement grows as a function of its initial composition. I think that even if the LTFF had infinite money, this pushes against funding every project where we expect the EV of the object-level work to be positive—if we want the community to attract people who do high-quality work, we should fund primarily high-quality work. Since the LTFF does not have infinite money, I don’t think this has much of an effect on my funding decisions, but I’d have to think about it more explicitly if we end up with much more money than our current funding bar requires. (There are also other obvious reasons not to fund all positive-EV things, e.g. if we expected to be able to use the money better in the future.)
I think it would be good to have scalable interventions for impact. A few thoughts on this:
At the org-level, there’s a bottleneck in mentorship and organizational capacity, and loosening it would allow us to take on more inexperienced people. I don’t know of a good way to fix this other than funding really good people to create orgs and become mentors. I think existing orgs are very aware of this bottleneck and working on it, so I’m optimistic that this will get much better over time.
Personally, I’m interested in experimenting with trying to execute specific high-value projects by actively advertising them and not providing significant mentorship (provided there aren’t negative externalities to the project not being executed well). I’m currently discussing this with the fund.
Overall, I think we will always be somewhat bottlenecked by having really competent people who want to work on longtermist projects, and I would be excited for people to think of scalable interventions for this in particular. I don’t have any great ideas here off the top of my head.
I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:
If you are the kind of person who thinks carefully about these risks, are likely to change your course of action if you get critical feedback, and proactively sync up with the main people/orgs in your space to ensure you’re not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused harm, including myself (I mentioned one of them here), and while it would have been good to avoid them, learning from those mistakes also helped us improve our work.
My perception is that “taking carefully calculated risks” won’t lead to your grant application being rejected (perhaps it would even improve your chances of being funded because it’s hard to find people who can do that well) – but “taking risks without taking good measures to prevent/mitigate them” will.
Thanks so much for this, that was informative. A few quick thoughts:
I’ve heard this one before and I can sympathize with it, but it strikes me as a red flag that something is going a bit wrong. (I’m not saying that this is your fault, but I am flagging it as an issue for the community more broadly.) Big companies often don’t have the ideal teams for new initiatives. Often urgency is very important, so they put something together relatively quickly. If it doesn’t work well, it’s not that big of a deal: they disband the team and have people go to other projects, and perhaps find better people to take their place.
In comparison, with nonprofits it’s much more difficult. My read is that we sort of expect nonprofits to never die, which means we need to be *very very* sure about them before setting them up. If that’s the case, it’s obviously severely limiting. The obvious solution would be to have bigger orgs with more flexibility. Perhaps if specific initiatives were going well and demanded independence, they could spin out later on, but hopefully not in their first few years.
Some ideas I’ve had:
- Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.
- Add additional resources to make existing researchers more effective.
- Buy the rights to books and spend on marketing for the key ones.
- Pay for virtual assistants and all other things that could speed researchers up.
- Add additional resources to make nonprofits more effective, easily.
- Better budgets for external contractors.
- Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank opportunities and do strategy for further funding.
While it might be a strange example, the wealthy, or in particular, the Saudi government are examples of how to spend lots of money with relatively few trusted people, semi-successfully.
Having come from the tech sector, in particular, it feels like there are often much more stingy expectations placed on EA researchers.
To clarify, I don’t think that most projects will be actively harmful—in particular, the “projects that result in a team covering a space that is worse than the next person who would have come along” case seems fairly rare to me, and would mostly apply to people who’d want to do certain movement-facing work or engage with policymakers. From a purely hits-based perspective, I think there’s still a dearth of projects that have a non-trivial chance of being successful, and this is much more limiting than projects being not as good as the next project to come along.
I agree with this. Maybe another thing that could help would be to have safety nets such that EAs who overall do good work could start and wind down projects without being worried about sustaining their livelihood or the livelihood of their employees? Though this could also create some pretty bad incentives.
Thanks for these, I haven’t thought about this much in depth and think these are overall very good ideas that I would be excited to fund. In particular:
I agree with this; I think there’s a big opportunity to do better and more targeted marketing in a way that could scale. I’ve discussed this with people and would be interested in funding someone who wanted to do this thoughtfully.
Also super agree with this. I think an unfortunate component here is that many altruistic people are irrationally frugal, including me—I personally feel somewhat weird about asking for money to have a marginally more ergonomic desk set-up or an assistant, but I generally endorse people doing this and would be happy to fund them (or other projects making researchers more effective).
I think historically, people have found it pretty hard to outsource things like this to non-EAs, though I agree with this in theory.
One total guess at an overarching theme for why we haven’t done some of these things already is that people implicitly model longtermist movement growth on the growth of academic fields, which grow via slowly accruing prestige and tractable work to do over time, rather than modeling them as a tech company the way you describe. I think there could be good reasons for this—in particular, putting ourselves in the reference class of an academic field might attract the kind of people who want to be academics, which are generally the kinds of people we want—people who are very smart and highly-motivated by the work itself rather than other perks of the job. For what it’s worth, though, my guess is that the academic model is suboptimal, and we should indeed move to a more tech-company like model on many dimensions.
Again, I agree with Asya. A minor side remark:
As someone who has experience with hiring all kinds of virtual and personal assistants for myself and others, I think the problem here is not the money, but finding assistants who will actually do a good job, and organizing the entire thing in a way that’s convenient for the researchers/professionals who need support. More than half of the assistants I’ve worked with cost me more time than they saved me. Others were really good and saved me a lot of time, but it’s not straightforward to find them. If someone came up with a good proposal for this, I’d want to fund them and help them.
Similar points apply to some of the other ideas. We can’t just spend money on these things; we need to receive corresponding applications (which generally hasn’t happened) or proactively work to bring such projects into existence (which is a lot of work).
There will likely be a more elaborate reply, but these two links could be useful.
What crucial considerations and/or key uncertainties do you think the EA LTF fund operates under?
Some related questions with slightly different framings:
What types/lines of research do you expect would be particularly useful for informing the LTFF’s funding decisions?
Do you have thoughts on what types/lines of research would be particularly useful for informing other funders’ funding decisions in the longtermism space?
Do you have thoughts on how the answers to those two questions might differ?
I’d be interested in better understanding the trade-off between independent vs. established researchers. Relative to other donors we fund a lot of independent research. My hunch is that most independent researchers are less productive than they would be working at organisations—although, of course, for many of them that’s not an option (geographical constraints, organisational capacity, etc.). This makes me set a somewhat higher bar for funding independent research. Some other fund managers disagree with me and think independent researchers tend to be more productive, e.g. due to bad incentives in academic and industry labs.
I expect distillation style work to be particularly useful. I expect there’s already relevant research here: e.g. case studies of the most impressive breakthroughs, studies looking at different incentives in academic funding, etc. There probably won’t be a definitive answer, so it’d also be important that I trust the judgement of the people involved, or have a variety of people with different priors going in coming to similar conclusions.
While larger donors can suffer from diminishing returns, there are sometimes also increasing returns to scale. One important thing larger donors can do that isn’t really possible at the LTFF’s scale is to found new academic fields. More clarity into how to achieve this and have the field go in a useful direction would be great.
It’s still mysterious to me how academic fields actually come into being. Equally importantly, what predicts whether they have good epistemics, whether they have influence, etc? Clearly part of this is the domain of study (it’s easier to get rigorous results in category theory than economics; it’s easier to get policymakers to care about economics than category theory). But I suspect it’s also pretty dependent on the culture created by early founders and the impressions outsiders form of the field. Some evidence for this is that some very closely related fields can end up going in very different directions: e.g. machine learning and statistics.
A key difference between the LTFF and some other funders is that we receive donations on a rolling basis, and I expect these donations to continue to increase over time. By contrast, many major donors have an endowment to spend down. So for them it’s a really important question how to time those donations: how much should they give now vs. later? Whereas I think for us the case for just donating every $ we receive seems pretty strong (except for keeping enough of a buffer to even out short-term fluctuations in application quality and donation revenue).
Edit: I really like Adam’s answer
There are a lot of things I’m uncertain about, but I should say that I expect most research aimed at resolving these uncertainties not to provide strong enough evidence to change my funding decisions (though some research definitely could!) I do think weaker evidence could change my decisions if we had a larger number of high-quality applications to choose from. On the current margin, I’d be more excited about research aimed at identifying new interventions that could be promising.
Here’s a small sample of the things that feel particularly relevant to grants I’ve considered recently. I’m not sure if I would say these are the most crucial:
What sources of existential risk are plausible?
If I thought that AI capabilities were perfectly entangled with their ability to learn human preferences, I would be unlikely to fund AI alignment work.
If I thought institutional incentives were such that people wouldn’t create AI systems that could be existentially threatening without taking maximal precautions, I would be unlikely to fund AI risk work at all.
If I thought our lightcone was overwhelmingly likely to be settled by another intelligent species similar to us, I would be unlikely to fund existential risk mitigation outside of AI.
What kind of movement-building work is effective?
Adam writes above how he thinks movement-building work that sacrifices quality for quantity is unlikely to be good. I agree with him, but I could be wrong about that. If I changed my mind here, I’d be more likely to fund a larger number of movement-building projects.
It seems possible to me that work that’s explicitly labeled as ‘movement-building’ is generally not as effective for movement-building as high-quality direct work, and could even be net-negative. If I decided this was true, I’d be less likely to fund movement-building projects at all.
What strands of AI safety work are likely to be useful?
I currently take a fairly unopinionated approach to funding AI safety work—I feel willing to fund anything that I think a sufficiently large subset of smart researchers would think is promising. I can imagine becoming more opinionated here, and being less likely to fund certain kinds of work.
If I believed that it was certain that very advanced AI systems were coming soon and would look like large neural networks, I would be unlikely to fund speculative work focused on alternate paths to AGI.
If I believed that AI systems were overwhelmingly unlikely to look like large neural networks, this would have some effect on my funding decisions, but I’d have to think more about the value of near-term work from an AI safety field-building perspective.
Several comments have mentioned that CEA provides good infrastructure for making tax-deductible grants to individuals and also that the LTF often does, and is well suited to, make grants to individual researchers. Would it make sense for either the LTF or CEA to develop some further guidelines about the practicalities of receiving and administering grants for individuals (or even non-charitable organisations) that are not familiar with this sort of income, to help funds get used effectively?
As a motivating example, when I recently received an LTF grant, I sought legal advice in my tax jurisdiction and found out the grant was tax-exempt. However, prior to that CEA staff said that many grantees do pay tax on grant funds and they would consider it reasonable for me to do so. I have been paid on scholarships and fellowships for nearly 10 years and had the strong expectation that such funding is typically tax-free, which led me to follow this up with a taxation lawyer; still, I wonder if other people, who haven’t previously received grant income, come into this with different expectations and end up paying tax unnecessarily. While specifics vary between tax jurisdictions, having the right set of expectations as a grantee helped me a lot. Maybe there are also other general areas of grant receipt/administration where it would be useful to provide advice.
Thanks for the input, we’ll take this into account. We do provide tax advice for the US and UK, but we’ve also looked into expanding this. Edit: If you don’t mind, could you let me know which jurisdiction was relevant to you at the time?
I received my LTF grant while living in Brazil (I forwarded the details of the Brazilian tax lawyer I consulted to CEA staff). However, I built up my grantee expectations while doing research in Australia and Sweden, and was happy they were also valid in Brazil.
My intuition is that most countries that allow either PhD students or postdocs to receive tax-free income for doing research at universities will probably also allow CEA grants to individuals to be declared in a tax-free manner, at least if the grant is for a research project.
Makes sense, thanks!
Is that tax advice published anywhere? I’d assumed any grants I received in the UK would be treated as regular income, and if that’s not the case it’s a pleasant surprise!
It’s not public. If you like, you can PM me your email address and I can try asking someone to get in touch with you.
What would you like to fund, but can’t because of organisational constraints? (e.g. investing in private companies is IIRC forbidden for charities).
It’s actually pretty rare that we’ve not been able to fund something; I don’t think this has come up at all while I’ve been on the fund (2 rounds), and I can only think of a handful of cases before.
It helps that the fund knows some other private donors we can refer grants to (with applicants’ permission), so in the rare cases something is out of scope, we can often still get it funded.
Of course, people who know we can’t fund them because of the fund’s scope may choose not to apply, so the true proportion of opportunities we’re missing may be higher. A big class of things the LTFF can’t fund is political campaigns. I think that might be high-impact in some high-stakes elections, though I’ve not donated to campaigns myself, and I’m generally pretty nervous of anything that could make long-termism perceived as a partisan issue (which it obviously is not).
I don’t think we’d often want to invest in private companies. As discussed elsewhere in this thread, we tend to find grants to individuals better than to orgs. Moreover, one of the attractive points of investing in a private company is that you may get a return on your investment. But I think the altruistic return on our current grants is pretty high, so I wouldn’t want to lock up capital. If we had 10-100x more money to distribute and so had to invest some of it to grant out later, then investing some proportion of it in companies where there’s an altruistic upside might make more sense.
If a private company applied for funding to the LTFF and they checked the “forward to other funders” checkbox in their application, I’d refer them to private donors who can directly invest in private companies (and have done so once in the past, though they weren’t funded).
What do you think is a reasonable amount of time to spend on an application to the LTFF?
If you’re applying for funding for a project that’s already well-developed (i.e. you have thought carefully about its route to value, what the roadmap looks like, etc.), 30-60 minutes should be enough, and further time spent polishing likely won’t improve your chances of getting funding.
If you don’t have a well-developed project, it seems reasonable to add whichever amount of time it takes to develop the project in some level of detail on top of that.
That’s surprisingly short, which is great by the way.
I think most grants are not like this. That is, you can usually increase your chance of funding by spending a lot of time polishing an application, which leads to a sort of arms race among applicants where more and more time is wasted on polishing applications.
I’m happy to hear that the LTFF does not reward such behavior. On the other hand, the same dynamic will still happen as long as people don’t know that more polish will not help.
You can probably save a lot of time on the side of the applicants by:
Stating how much time you recommend people spend on the application
Sharing some examples of successful applications (with the permission of the applicants) to show others what level and style of writing to aim for.
I understand that no one application will be perfectly representative, but even just one example would still help, and several examples would help even more. Preferably the examples would be of good-enough rather than optimal writing, assuming that you want people to be satisficers rather than maximizers with regard to application-writing quality.
On reflection I actually think 1-4 hours seems more correct. That’s still pretty short, and we’ll do our best to keep it as quick and simple as possible.
We’re just updating the application form and had been planning to make the types of changes you’re suggesting (though not sharing successful applications—but that could be interesting, too).
What percentage of people who are applying for a transition grant from something else to AI Safety, get approved? Anything you want to add to put this number in context?
What percentage of people who are applying for funding for independent AI Safety research, get approved? Anything you want to add to put this number in context?
For example, if there is a clear category of people who don’t get funding because they clearly want to do something other than safeguarding the long-term future, then this would be useful contextual information.
This isn’t exactly what you asked, but the LTFF’s acceptance rate of applications that aren’t obvious rejections is ~15-30%.