It may not be worth becoming a research lead under many worldviews.
I’m with you on almost all of your essay regarding the advantages of a PhD and the need for more research leads in AIS, but I would raise another kind of issue: there are not very many career options for a research lead in AIS at present. After a PhD, you could pursue:
1. Big RFPs. But most RFPs from large funders have a narrow focus area; currently it tends to be prosaic ML safety and mechanistic interpretability. And having to submit to grantmakers’ research directions somewhat defeats the purpose of being a research lead.
2. Joining an org working on an adjacent research direction. But such an org may not exist, depending on what you’re excited to work on.
3. Academia. But you have to be willing to travel, teach a lot, and live on well below the salary of a research contributor.
4. Little funders (like LTFF). But applications may take 3+ months to process, grants only last a year at a time, and they won’t respond to your emails asking for an explanation of this.
5. Getting hired as a researcher at OpenPhil. But very few people will be hired and given research autonomy there.
For many research leads, these options won’t be very attractive, and I find it hard to feel positive about convincing people to become research leads until better opportunities are in place. What would make me excited? I think we should have:
A. Research-agenda-agnostic RFPs. There needs to be some way for experienced AI safety researchers to figure out whether AI safety is actually a viable long-term career for them. Currently, there’s no way to get OpenPhil’s opinion on this; you simply have to wait years until they notice you. But there aren’t very many AI safety researchers, so there should be a way for them to run this test and decide which way to direct their lives.
Concrete proposal: OpenPhil should say “we want applications from AIS researchers who we might be excited about as individuals, even if we don’t find their research exciting” and should start an RFP along these lines.
B. MLGA (Make LTFF Great Again). I’m not asking for much here: they should be faster, be calibrated on their timelines, respond to email in case of delays, and offer multi-year grants.
Concrete proposal: LTFF should say “we want to fund people for multiple years at a time, and we will resign if we can’t get our grantmaking process to work properly”.
C. At least one truly research-agenda-agnostic research organisation that will hire research leads to pursue their own research interests.
Concrete proposal: Folks should found an academic-department-style research organisation that hires research leads, gets them office space and visas, and gives them some help applying for grants to fund their teams. Of course, this requires a level of interest from OpenPhil and other grantmakers in supporting such an organisation.
Finally, I conclude on a personal note. As Adam knows, and as other readers may deduce, I am myself a research lead underwhelmed with options (1-5). I would like to fix C (or A-B) and am excited to talk about ways of achieving this, but a big part of me just wants to leave AIS for a while, as the options outside it are so much stronger from a selfish perspective. Given that AIS has been this way for years, I suspect many others might leave before these issues are fixed.
(A) Call this “Request For Researchers” (RFR). OpenPhil tried a more general version of this in the form of the Century Fellowship, but discontinued it. That in turn was a Thiel Fellowship clone, like several other programs (e.g. Magnificent Grants). The early years of the Thiel Fellowship show that this can work, but I think it’s hard to do well, and it does not seem like OpenPhil wants to keep trying.
(B) I think it would be great for some people to get support for multiple years. PhDs work like this, and good research can be hard to do over a series of short few-month grants. But the long durations do make such grants pretty high-stakes bets, and you need to select hard not just on research skill but also on the character traits that mean people don’t need external incentives.
(C) I think “agenda-agnostic” and “high quality” might be hard to combine. It seems like there are three main ways to select good people: rely on competence signals (e.g. lots of cited papers, employment at a selective organisation), rely on more-or-less standardised tests (e.g. a typical programming interview, SATs), or rely on inside-view judgements of what’s good in some domain. New researchers are hard to assess by the first, and I don’t think there’s a cheap programming-interview-but-for-research-in-general that spots research talent at high rates, so it seems you have to rely heavily on the third. And this is very correlated with agendas; a researcher in domain X will be good at judging ideas in that domain, but less so in others.
The style of this that I’d find most promising is:
Someone with a good overview of the field (e.g. at OpenPhil) picks a few “department chairs”, each with some agenda/topic.
Each department chair picks a few research leads who they think have promising work/ideas in the direction of their expertise.
These research leads then get collaborators/money/ops/compute through the department.
I think this would be better than a grab-bag of people selected on credentials and generic competence, because an important part of selecting for research talent is the step where someone with good research taste endorses someone else’s agenda takes on agenda-specific, inside-view grounds.
This is an important point. There’s a huge demand for research leads in general, but the people hiring & funding often have pretty narrow interests. If your agenda is legibly exciting to them, then you’re in a great position. Otherwise, there can be very little support for more exploratory work. And I want to emphasize the legible part here: you can do something that’s great & would be exciting to people if they understood it, but novel research is often time-consuming to understand, and these are time-constrained people who will not want to invest that time unless they have a strong signal it’s promising.
A lot of this problem is downstream of very limited grantmaker time in AI safety. I expect this to improve in the near future, but not enough to fully solve the problem.
I do like the idea of a more research-agenda-agnostic research organization. I’m striving to have FAR be more open-minded, but we can’t support everything, so we are still pretty opinionated: we prioritize agendas that we’re most excited by and which are a good fit for our research style (engineering-intensive empirical work). I’d like to see another org in this space set up to support a broader range of agendas, and am happy to advise people who’d like to set something like this up.