In terms of the contrast with the AWF in particular, I think the funding opportunities in the long-termist vs animal welfare spaces look quite different. One big difference is that interest in long-termist causes has exploded in the last decade. As a result, there’s a lot of talent interested in the area, but there’s limited organisational and mentorship capacity to absorb this talent. By contrast, the animal welfare space is more mature, so there’s less need to strike out in an independent direction. While I’m not sure about this, there might also be a cultural factor: if you’re doing advocacy, it seems useful to have an organisation brand behind you (even if it’s just a one-person org). This seems much less important if you want to do research.
Tangentially, I see a lot of people debating whether EA is talent constrained, funding constrained, vetting constrained, etc. My view is that most orgs, at least in the AI safety space, can only grow at a relatively small rate (10-30% per year) while still providing adequate mentorship. This is talent constrained in the sense that a larger applicant pool will help orgs select even better people. But adding more talent won’t necessarily increase the number of hires.
While 10-30% is a relatively small growth rate, if sustained I expect it to eventually outstrip growth in the longtermist talent pipeline: my median guess would be sometime in the next 3-7 years. I see the LTFF’s grants to individuals as in part bridging the gap while orgs scale up, giving talented people space to continue to develop, and perhaps even to found an org. So I’d expect our proportion of individual grants to decline eventually. This is a personal take, though, and I think others on the fund are more excited about independent research on a longer-term basis.
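To make the arithmetic behind that 3-7 year guess concrete, here’s a minimal back-of-the-envelope sketch. Every number in it (starting capacity, pipeline size, growth rates) is an illustrative assumption on my part, not data from this discussion; the point is just that compounding org capacity eventually crosses a more slowly growing pipeline, and that the crossover year is very sensitive to the assumed rates.

```python
# Toy model: when does compounding org hiring capacity overtake the
# yearly talent pipeline? All starting values and rates are illustrative
# assumptions, not real figures.

def years_until_crossover(capacity, pool, capacity_growth, pool_growth,
                          max_years=30):
    """First year in which org hiring capacity meets the yearly talent pool."""
    for year in range(1, max_years + 1):
        capacity *= 1 + capacity_growth
        pool *= 1 + pool_growth
        if capacity >= pool:
            return year
    return None  # no crossover within the horizon

# Hypothetical: orgs can absorb 100 people/year, 200 qualified people/year
# enter the pipeline, and the pipeline grows 5%/year.
for org_growth in (0.10, 0.20, 0.30):
    year = years_until_crossover(100, 200, org_growth, 0.05)
    print(f"org capacity growth {org_growth:.0%}: crossover in year {year}")
```

Under these made-up numbers the crossover lands anywhere from 4 to 15 years out depending on the org growth rate, which is why a 3-7 year median guess can coexist with substantial uncertainty.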
My view is that most orgs, at least in the AI safety space, can only grow at a relatively small rate (10-30% per year) while still providing adequate mentorship.
This might be a small point, but while I would agree, I imagine that there are some possible orgs that could strategically grow more quickly, and, by growing, could eventually come to dominate the funding.
I think one thing that’s going on is that right now, due to funding constraints, individuals are encouraged to create organizations that are efficient when small, as opposed to efficient when large. I’ve made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in the org at that scale. Right now it seems like we only have one large funder, which makes things tricky.
This is a good point, and I do think having multiple large funders would help with this. If the LTFF’s budget grew enough I would be very interested in funding scalable interventions, but it doesn’t seem like our comparative advantage now.
I do think possible growth rates vary a lot between fields. My hot take is that new research fields are particularly hard to grow quickly. The only successful ways I’ve seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researchers, etc.). You can optimize this so that senior researchers can mentor more people (e.g. using peer advice and assistants to free up senior staff time), but that seems unlikely to yield more than a 2x increase in growth rate.
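As a rough illustration of why apprenticeship-style training caps growth, here’s a toy steady-state model; all parameter values (mentees per senior, years to seniority, attrition) are assumptions I’ve made up for illustration. The growth rate of the senior pool is roughly mentees per senior divided by years to seniority, so doubling mentorship capacity doubles the growth rate but no more.

```python
# Toy apprenticeship model: growth in senior researchers is bounded by
# how many mentees each senior can supervise at once and how long a
# mentee takes to become senior. All parameter values are illustrative
# assumptions.

def steady_state_growth(mentees_per_senior, years_to_senior, attrition=0.3):
    """Approximate annual growth rate of the senior researcher pool.

    attrition is the assumed fraction of mentees who never reach
    senior level.
    """
    return mentees_per_senior * (1 - attrition) / years_to_senior

baseline = steady_state_growth(mentees_per_senior=1.5, years_to_senior=5)
optimized = steady_state_growth(mentees_per_senior=3.0, years_to_senior=5)
print(f"baseline:            {baseline:.0%}/year")   # ~21%/year
print(f"2x mentor capacity:  {optimized:.0%}/year")  # ~42%/year
```

On these assumed numbers, even doubling each senior’s mentorship load only moves annual growth from roughly 20% to roughly 40%, matching the “unlikely to yield more than a 2x increase” intuition.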
Most cases where orgs have scaled up successfully have drawn on a lot of existing talent. Tech startups can grow quickly but they don’t teach each new hire how to program from scratch. So I’d love to see scalable ways to get existing researchers to work on priority areas like AI safety, biosecurity, etc.
It can be surprisingly hard to change what researchers work on, though. Researchers tend to be intrinsically motivated, so right now the best way I know is to just do good technical work to show that problems exist (and are tractable to solve), combined with clear communication. Funding can help here a bit: make sure the people doing the good technical work are not funding constrained.
One other approach might be to build better marketing: DeepMind, OpenAI, etc. are great at getting their papers a lot of attention. If we could similarly promote relevant technical work, that might help draw more researchers to these problems. That said, a lot of people in academia really dislike these companies’ self-promotion, so it could backfire if done badly.
The other way to scale up is to get people to skill up in areas with more scalable mentorship: e.g. work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet for absorbing most additional junior talent right now. It may beat the 10-30% figure I gave, but we’d still have to wait 3-5 years before the talent comes on tap, unfortunately.
I agree that research organizations of the type that we see are particularly difficult to grow quickly.
My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine more scalable engineering-heavy or marketing-heavy paths to impact on these problems: for example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront costs (and marginal costs), but they are ones where I could see investing $10-100 million/year if we wanted.
Right now it seems like our solution to most problems is “try to solve it with experienced researchers”. That’s a tool we have a strong comparative advantage in, but it’s not the only tool in the possible toolbox, and it’s very hard to scale, as you note (I know of almost no organizations that have done this well).
Separately,
The other way to scale up is to get people to skill up in areas with more scalable mentorship: e.g. work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet for absorbing most additional junior talent right now. It may beat the 10-30% figure I gave, but we’d still have to wait 3-5 years before the talent comes on tap, unfortunately.
I just want to flag that I think I agree, but I also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but in other fields (philosophy, some of econ, bio-related areas), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting from or actively harmful for alignment work. It definitely feels like we should eventually be able to do better, but it might be a while.
Just want to say I agree with both Habryka’s comments and Adam’s take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don’t have the capacity to absorb talent.
I largely agree with Habryka’s comments above.
Thanks for this reply, makes a lot of sense!