This is an excellent comment, thanks Adam.

A couple impressions:

Totally agree there are bad incentives in lots of places.
I think figuring out which existing institutions have incentives that best serve your goals, and building a strategy around those incentives, is a key operation. My intent with this article was to illustrate some of that type of thinking within planning for grad school. If I were writing a comparison between working in academia and other possible ways to do research, I would definitely have flagged the many ways academic incentives are better than the alternatives! I appreciate you doing that, because it’s clearly true and important.
In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier? Like, knowing you work at a for-profit company makes it really transparently clear that your manager’s (or manager’s manager’s) incentives are different from yours, if you want to do directly impactful research. Whereas I’ve observed folks, in my academic niche of biological engineering, behave as if they believe a research project to be directly good when I (and others) can’t see the impact proposition, and the behavior feels best explained by publishing incentives? In more extreme cases, people will say that project A is less important to prioritize than project B because B is more impactful, but will invest way more in A (which just happens to be very publishable). I’m sure I’m also very guilty of this, but it’s easier to recognize in other people :P
(I’m primarily reporting on biology/bioengineering/bioinformatics academia here, though I consume a lot of deep learning academia’s output.) FWIW, my sense is there is actually a difference in the strength and type of incentives between ML and biology, at least. From talking with friends in DL academic labs, it seems like there is still pressure to publish in conferences, but there are also lots of other ways to get prestige currency, like putting out a well-read arXiv paper or being a primary contributor to an open-source library like PyTorch. In biology, from what I’ve seen, it just really, really, really matters that you publish in a high-impact-factor journal, ideally with “Science” or “Nature” on the cover.
It also matters a whole lot who your advisor is, as you mention. Having an advisor who is super bought into the impact proposition of your research is a totally different game. I have the sense that most people are not this lucky by default, and so would want to optimize for that type of buy-in or, alternatively, for the kind of laissez-faire management which I pattern-match to the research freedom you’re describing.
All of this said, I think my biggest reaction is something like “there are ways of finding really good incentives for doing research”! Instead of working in existing institutions (academia, for-profit research labs, for-profit companies), come up with a good idea for what to research and how, and just do it. More precisely: ask an altruistic funder for money, find other people to work with, and make an organization if it seems good. There are small and large versions of this. On the small scale, you can apply for EA grants or to another org that makes grants to individuals, and if you’re really onto something, ask for org-scale funding. I’m not claiming that this is always a better idea: you will be missing lots of resources you might otherwise have in e.g. academia.
But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned, including academia, look extremely misaligned. And IMO it’s worth making it clear that relative to this, almost any lab’s or institute’s academic incentives suck. Once this DIY option is on the table, I think it is possible to make better choices about whether you like the compromise of working at another institution, or whether you will use that institution to get specific resources that make the “forge your own way” option more tractable. E.g.: don’t have any good ideas for a research agenda? Great, focus on figuring this out in your PhD. Don’t know any good people you might recruit for your project? Great, focus on building a good network in your PhD. Etc.
I’m curious whether you still feel like incentives are misaligned in this world, whether it feels too impractical to be included in your list, or whether you disagree with me elsewhere?

Thanks again :)
Sorry for the (very) delayed reply here. I’ll start with the most important point.
> But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned, including academia, look extremely misaligned.
I think the incentives set up by EA funders are somewhat better overall than run-of-the-mill academic incentives, but the difference is smaller than you seem to believe, and we’re a long way from cracking it. This is something we can get better at, but I expect it will take significant infrastructure and iteration: e.g. new methods for peer review, experimenting with different grantmaker-grantee relationships, etc.
Concretely, I think EA funders are really good (way better than most of academia or mainstream funders) at picking important problems like AI safety or biosecurity. I also think they’re better at reasoning about possible theories of change (if this project succeeds, would it actually help?) and considering a variety of paths to impact (e.g. maybe a blog post can have more impact than a paper in this case, or maybe we’d even prefer to distribute some results privately).
However, I think most EA funders are actually worse than the traditional academic structure at evaluating whether a research agenda is being executed well. I help the LTFF evaluate grants, many of which are for independent research, and while I try to understand each person’s research agenda and how successful they’ve been, I think it’s fair to say I spend at least an order of magnitude less time on this per applicant than someone’s academic advisor does.
Even worse, I have basically zero visibility into the process: I only see the final write-up, and maybe have an interview with the person. If I see a negative result, it’s really hard for me to tell whether the person executed the agenda well but the idea just didn’t pan out, or whether they bungled the process. Whereas I find it quite easy to form an opinion on projects I advise, as I can see the project evolve over time and how the person responds to setbacks. Of course, we can (and do) ask for references, but if someone is executing independently they may not have any, and there’s always some conflict of interest when advisors provide a reference.
Of course, when it comes to evaluating larger research orgs, funders can do a deeper dive and the stochasticity of research matters less (as it’s averaged over a longer period of time). But this is just punting the problem to those who are running the org. In general I still think evaluating research output is a really hard problem.
I do think one huge benefit EA has is that people are mostly trying to “play fair”, whereas in academia there is sadly more adversarial behavior (on the light side, people structuring their papers to dodge reviewer criticism; on the dark side, actual collusion in peer review or academic fraud). However, this isn’t scalable, and I wouldn’t want to build systems that rely on it.
> In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier?
This is a fair point. I do think people kid themselves a bit about how much “academic freedom” they really have, and this can lead to people in effect internalizing the incentives more.
> I’ve observed folks [...] behave as if they believe a research project to be directly good when I (and others) can’t see the impact proposition, and the behavior feels best explained by publishing incentives.
Believing something is “directly good” when others disagree seems like a classic case of wishful thinking. There are lots of reasons why someone might be motivated to work on a project (despite it not, in fact, being “directly good”). Publication incentives are certainly a big one, and might well be the best explanation for the cases you saw. But in general I think it could also be that they just find the topic intellectually interesting, have been working on it for a while and are suffering from the sunk cost fallacy, etc.