The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we’d be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).
Many of the grants we make to individuals are for career transitions, such as someone retraining from one research field to another, or for one-off projects. So I would expect most grants to not be renewals. That said, the bar for renewals does tend to be higher. This is because we pursue a hits-based giving approach, so are willing to fund projects that are likely not to work out—but of course will not want to renew the grant if it is clearly not working.
I think being a risk-tolerant funder is particularly valuable since most employers are, quite rightly, risk-averse. Firing people tends to be harmful to morale; internships or probation periods can help, but take a lot of supervisory time. This means people who might be a great hire but are high-variance often don’t get hired. Funding them for a period of time to do independent work can derisk the grantee, since they’ll have a more substantial portfolio to show.
The level of excitement about long-term independent work varies between fund managers. I tend to think it’s hard for people to do great work independently. I’m still open to funding it, but I want to see a compelling case that there’s not an organisation that would be a good home for the applicant. Some other fund managers are more concerned by perverse incentives in established organisations (especially academia), so are more willing to fund independent research.
I’d be interested to hear thoughts on how we could better support our grantees here. We do sometimes forward applications on to other funders (with the applicant’s permission), but don’t have any systematic program to secure further funding (beyond applying for renewals). We could try something like the “demo days” popular in the VC world, but I’m not sure there’s a large enough ecosystem of potential funders for this to be worth it.
I want to see a compelling case that there’s not an organisation that would be a good home for the applicant.
My impression is that it is not possible for everyone who wants to help with the long term to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there. Do you disagree with this?
I can’t speak for Adam, but I just wanted to say that I personally agree with you, which is one of the reasons I’m currently excited about funding independent work.
Thanks for picking up the thread here, Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I’m sure would have got an offer a few years ago.
I’m pretty happy to see the LTFF offering effectively “bridge” funding for people who don’t quite meet the hiring bar yet, but who I think are likely to in the next few years. However, I’d be hesitant about heading towards a large fraction of people working independently long-term. I think there are huge advantages from the structure and mentorship an org can provide. If orgs aren’t scaling up fast enough, then I’d prefer to focus on trying to help speed that up.
The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help by providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. That said, I’d love to be proven wrong here.
I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than they did. In a way, an org is just a crystallised collaboration of previously independent researchers.
I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment.
However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of their PhD in a “normal” (not AI Safety focused) PhD program is sort of an independent researcher.
The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers.
We are working on that. I’m not optimistic about current orgs keeping up with the growth of the field, and I don’t think it is healthy for careers to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.