I’m not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.
Let’s compare the situation of the Long-Term Future Fund evaluating the quality of a grant proposal to that of the academic community evaluating the quality of a published paper. Compared to the LTFF evaluating a grant proposal, the academic community evaluating a published paper has big advantages: the work is evaluated retrospectively rather than prospectively (it actually exists; it isn’t just a hypothetical project); the community has more time and more eyeballs; and it includes people who are very senior in their field, whereas your team is relatively junior. Plus, “longtermism” is a huge area that’s really hard to be an expert in all of.
Even so, the academic community doesn’t seem very good at this task. “Sleeping beauty” papers, whose quality is only recognized long after publication, seem common. Breakthroughs are initially denounced by scientists, or simply underappreciated (often ‘correctly’, in the sense of being less fleshed out than existing theories). This paper contains a list of 34 examples of Nobel Prize-winning work being rejected by peer review. “Science advances one funeral at a time”, as they say.
Problems compound when the question of first-order quality is replaced by the question of what others will consider to be high quality. You’re funding researchers to do work that you predict others will consider good—and based on relatively superficial assessments, due to time limitations, it sounds like.
Seems like a recipe for herd behavior. But breakthroughs come from mavericks. This funding strategy could have a negative effect by stifling innovation (filtering out contrarian thinking and contrarian researchers from the field).
Keep longtermism weird?
(I’m also a little skeptical of your “low-quality work dilutes the quality of those fields and attracts other low-quality work” fear—since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality. I think the most likely fate of low-quality work is to be forgotten. If people are too credulous of work which is actually low-quality, it’s unclear to me why the fund managers would be immune to this, and having more contrarians seems like the best solution to me. The general approach of “fund many perspectives and let them determine what constitutes quality through discussion” has the advantage of offloading work from the LTFF team.)
I’m also a little skeptical of your “low-quality work dilutes the quality of those fields and attracts other low-quality work” fear—since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.
The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and the longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I’m imagining is that smart people look at existing work and think “these people seem amateurish, and I’m not interested in engaging with them”. Luke Muehlhauser’s report on case studies in early field growth gives the case of cryonics, which “failed to grow [...] is not part of normal medical practice, it is regarded with great skepticism by the mainstream scientific community, and it has not been graced with much funding or scientific attention.” I doubt most low-quality work we could fund would cripple the surrounding fields this way, but I do think it would have an effect on the kind of people who were interested in doing longtermist work.
I will also say that I think somewhat different perspectives do get funded through the LTFF, partially because we’ve intentionally selected fund managers with different views, and because we give significant weight to a single fund manager being really excited about something. We’ve made many grants that didn’t cross the funding bar for one or more fund managers.
Sure. I guess I don’t have a lot of faith in your team’s ability to do this, since you, and some of the people you’re funding, are already saying things that seem amateurish to me. But I’m not sure that is a big deal.