Sorry for the (very) delayed reply here. I’ll start with the most important point.
But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned including academia look extremely misaligned.
Overall, I think the incentives set up by EA funders are somewhat better than run-of-the-mill academic incentives, but the difference is smaller than you seem to believe, and we’re a long way from cracking it. This is something we can get better at, but I expect it will take significant infrastructure and iteration: e.g. new methods for peer review, experimenting with different granter-grantee relationships, etc.
Concretely, I think EA funders are really good (way better than most of academia or mainstream funders) at picking important problems like AI safety or biosecurity. I also think they’re better at reasoning about possible theories of change (if this project succeeds, would it actually help?) and considering a variety of paths to impact (e.g. maybe a blog post can have more impact than a paper in this case, or maybe we’d even prefer to distribute some results privately).
However, I think most EA funders are actually worse than the traditional academic structure at evaluating whether a research agenda is being executed well. I help the LTFF evaluate grants, many of which are for independent research, and while I try to understand people’s research agendas and how successful they’ve been, it’s fair to say I spend at least an order of magnitude less time on this per applicant than someone’s academic advisor would.
Even worse, I have basically zero visibility into the process: I only see the final write-up, and maybe have an interview with the person. If I see a negative result, it’s really hard for me to tell whether the person executed the agenda well and the idea just didn’t pan out, or whether they bungled the process. By contrast, I find it quite easy to form an opinion on projects I advise, as I can see the project evolve over time and how the person responds to setbacks. Of course, we can (and do) ask for references, but people executing independently may not have any, and there’s always some conflict of interest when an advisor provides a reference.
Of course, when it comes to evaluating larger research orgs, funders can do a deeper dive and the stochasticity of research matters less (as it’s averaged over a longer period of time). But this is just punting the problem to those who are running the org. In general I still think evaluating research output is a really hard problem.
I do think one huge benefit EA has is that people are mostly trying to “play fair”, whereas in academia there is sadly more adversarial behavior (at the mild end, people structuring their papers to dodge reviewer criticism; at the extreme end, outright collusion in peer review or academic fraud). However, relying on this good faith doesn’t scale, and I wouldn’t want to build systems that depend on it.
In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier?
This is a fair point. I do think people kid themselves a bit about how much “academic freedom” they really have, and this can lead them, in effect, to internalize the incentives more.
I’ve observed folks [...] behave as if they believe a research project to be directly good when I (and others) can’t see the impact proposition, and the behavior feels best explained by publishing incentives.
Believing something is “directly good” when others disagree seems like a classic case of wishful thinking. There are lots of reasons why someone might be motivated to work on a project despite it not, in fact, being “directly good”. Publication incentives are certainly a big one, and might well be the best explanation for the cases you saw. But in general it could also be that they just find the topic intellectually interesting, or have been working on it for a while and are falling prey to the sunk-cost fallacy, etc.