I notice that all but one of the November 2020 grants were given to individuals as opposed to organisations. What is the reason for this?
To clarify I’m certainly not criticising—I guess it makes quite a bit of sense as individuals are less likely than organisations to be able to get funding from elsewhere, so funding them may be better at the margin. However I would still be interested to hear your reasoning.
I notice that the animal welfare fund gave exclusively to organisations rather than individuals in the most recent round. Why do you think there is this difference between LTFF and AWF?
Speaking just for myself on why I tend to prefer the smaller individual grants:
Currently, when I look at the funding landscape, it seems that without the LTFF there would be a pretty big hole in available funding for projects to get off the ground and for individuals to explore interesting new projects or enter new domains. Open Phil very rarely makes grants smaller than ~$300k, and many other donors don’t really like giving to individuals and early-stage organizations, because these often lack established charity status, which makes donations to them non-tax-deductible.
CEA has set up infrastructure to allow tax-deductible grants to individuals and organizations without charity status, and the fund itself seems well-suited to evaluating individuals and the projects they run, since we all have pretty wide networks and can pretty quickly gather good references on individuals who are working on projects that don’t yet have an established track record.
I think in a world without Open Phil or the Survival and Flourishing Fund, much more of our funding would go to established organizations.
Separately, I personally view a lot of the intellectual work to be done on the long-term future as quite compatible with independent researchers asking for grants just for themselves, or maybe for small teams around them. This feels similar to how academic funding is often distributed, and I think it makes sense for domains where a lot of people should explore a lot of different directions and where we have set up infrastructure so that researchers and distillers can make contributions without necessarily needing a whole organization around them (which I think the EA Forum enables pretty well).
In addition to both of those points, I also think evaluating organizations requires a somewhat different skillset than evaluating individuals and small team projects, and we are currently better at the latter than the former (though I think we would reskill if organizational grants seemed likely to become more important again).
Thanks for this detailed answer. I think that all makes a lot of sense.
I largely agree with Habryka’s comments above.
In terms of the contrast with the AWF in particular, I think the funding opportunities in the longtermist and animal welfare spaces look quite different. One big difference is that interest in longtermist causes has exploded in the last decade. As a result, there’s a lot of talent interested in the area, but limited organisational and mentorship capacity to absorb it. By contrast, the animal welfare space is more mature, so there’s less need to strike out in an independent direction. While I’m not sure about this, there might also be a cultural factor: if you’re doing advocacy, it seems useful to have an organisational brand behind you (even if it’s just a one-person org). This seems much less important if you want to do research.
Tangentially, I see a lot of people debating whether EA is talent constrained, funding constrained, vetting constrained, etc. My view is that most orgs, at least in the AI safety space, can only grow at a relatively small rate (10-30% per year) while still providing adequate mentorship. This is talent constrained in the sense that having a larger applicant pool will help orgs select even better people, but adding more talent won’t necessarily increase the number of hires.
While 10-30% is a relatively small growth rate, if it is sustained then I expect it to eventually outstrip growth in the longtermist talent pipeline: my median guess would be sometime in the next 3-7 years. I see the LTFF’s grants to individuals in part trying to bridge the gap while orgs scale up, giving talented people space to continue to develop, and perhaps even found an org. So I’d expect our proportion of individual grants to decline eventually. This is a personal take, though, and I think others on the fund are more excited about independent research on a more long-term basis.
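To make the compounding behind that guess concrete, here is a minimal back-of-the-envelope sketch; every number in it is a hypothetical assumption chosen for illustration, not a figure from this discussion.

```python
# Illustration only: all numbers below are made-up assumptions.
import math

org_headcount = 300        # assumed current headcount across relevant orgs
org_growth = 0.20          # assumed sustained hiring growth, within the 10-30% range
entrants_per_year = 150    # assumed new people entering the talent pipeline each year
pipeline_growth = 0.05     # assumed growth rate of that pipeline

# Annual hires in year t:    org_headcount * org_growth * (1 + org_growth) ** t
# Annual entrants in year t: entrants_per_year * (1 + pipeline_growth) ** t
# Hiring overtakes the pipeline when the two are equal:
t_cross = math.log(entrants_per_year / (org_headcount * org_growth)) / math.log(
    (1 + org_growth) / (1 + pipeline_growth)
)
print(f"hiring overtakes the pipeline after ~{t_cross:.1f} years")  # ~6.9 years with these numbers
```

Varying these assumptions within plausible ranges moves the crossover point around, which is roughly why the guess above is a range of years rather than a point estimate.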
This might be a small point, but while I would agree that most orgs can only grow at that rate, I imagine that strategically there are some possible orgs that could grow more quickly, and that, by growing, could eventually come to dominate the funding.
I think one thing that’s going on is that, right now, funding constraints encourage individuals to create organizations that are efficient when small, as opposed to efficient when large. I’ve made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in the organization at that scale. Right now it seems like we only have one large funder, which makes things tricky.
This is a good point, and I do think having multiple large funders would help with this. If the LTFF’s budget grew enough I would be very interested in funding scalable interventions, but it doesn’t seem like our comparative advantage now.
I do think possible growth rates vary a lot between fields. My hot take is that new research fields are particularly hard to grow quickly. The only successful ways I’ve seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researchers, etc.). You can optimize this to let senior researchers mentor more people (e.g. lots of peer advice and assistants to free up senior staff time), but that seems unlikely to yield more than a 2x increase in growth rate.
Most cases where orgs have scaled up successfully have drawn on a lot of existing talent. Tech startups can grow quickly but they don’t teach each new hire how to program from scratch. So I’d love to see scalable ways to get existing researchers to work on priority areas like AI safety, biosecurity, etc.
It can be surprisingly hard to change what researchers work on, though. Researchers tend to be intrinsically motivated, so right now the best way I know is to just do good technical work to show that problems exist (and are tractable to solve), combined with clear communication. Funding can help here a bit: make sure the people doing the good technical work are not funding constrained.
One other approach might be to build better marketing: DeepMind, OpenAI, etc. are great at getting their papers a lot of attention. If we could promote relevant technical work similarly, that might help draw more researchers to these problems, although a lot of people in academia really hate these companies’ self-promotion, so it could backfire if done badly.
The other way to scale up is to get people to skill up in areas with more scalable mentorship: e.g. work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet for absorbing most additional junior talent right now. This may beat the 10-30% figure I gave, but we’d still have to wait 3-5 years before the talent comes on tap, unfortunately.
I agree that research organizations of the type that we see are particularly difficult to grow quickly.
My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine more scalable engineering-heavy or marketing-heavy paths to impact on these problems, for example setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront (and marginal) costs, but they are ones where I could see investing $10-100M/year if we wanted to.
Right now it seems like our solution to most problems is “try to solve it with experienced researchers”, which seems to be a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that’s very hard to scale, as you note (I know of almost no organizations that have done this well).
Separately, regarding the suggestion to skill up in areas with more scalable mentorship: I just want to flag that I think I agree, but I also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of econ, anything bio-related), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting from or actively harmful for alignment work. It definitely feels like we should eventually be able to do better, but it might be a while.
Just want to say I agree with both Habryka’s comments and Adam’s take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) but don’t yet have the capacity to absorb all the available talent.
Thanks for this reply, makes a lot of sense!
I agree with Habryka and Adam.
Regarding the LTFF (Long-Term Future Fund) / AWF (Animal Welfare Fund) comparison in particular, I’d add the following:
The global longtermist community is much smaller than the global animal rights community, which means that the animal welfare space has a lot more existing organizations and people trying to start organizations that can be funded.
Longtermist cause areas typically involve a lot more research, which often implies funding individual researchers, whereas animal welfare work is typically more implementation-oriented.
Also makes sense, thanks.