Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is well aware of (I have been a strong critic of EAF's policy and implementation of research grants, including those directed at MIRI and FRI).
My main worry is that grants aimed at research cannot be evaluated without having them assessed by expert researchers in the given domain, that is, by people with a proven track record in the relevant field of research. I think the best way to see why this matters is to take any other scientific domain: medicine, physics, etc. If we wanted to evaluate whether a certain research grant in medicine should be funded (e.g. the development of an important vaccine), it wouldn't be enough to just like the objective of the grant. We would have to assess:
Methodological feasibility of the grant: are the proposed methods conducive to the stated goals? How will the project respond to possible obstacles, and which alternative methods will be employed in such cases?
Fit of the project within the state of the art: how well is the proposal informed by the relevant research in the given domain? (E.g. are important methods and insights overlooked? Is another research team already working on a related topic, such that combining insights would increase the efficiency of the current project? Etc.)
etc.
Clearly, these questions cannot be answered by anyone who is not an expert in medicine. My point is that the same goes for research in any other scientific domain, from philosophy to AI. Hence, if your team consists of people who are enthusiastic about the topic, who have experience reading about it, or who have experience managing EA grants and non-profit organizations, that is not adequate expertise for evaluating research grants. The same goes for your advisers: Nick has a PhD in philosophy, but that alone doesn't make him an expert in, say, AI (it isn't enough for expertise in many domains of philosophy either, unless he has a track record of continuous research in the given domain). Jonas has a background in medicine, economics, and charity evaluation, but that is not the same as active engagement in research.
Inviting expert researchers to evaluate each of the submitted projects is the only way to award research grants responsibly. That is precisely what both academic and non-academic funding institutions do. Otherwise, how can we possibly argue that the funded research is promising and that we have done our best to estimate its effectiveness? This matters not only for ensuring the quality of the research itself, but also for handling donors' contributions responsibly, in line with the values of EA in general.
My impression is that so far the main criteria employed when assessing the feasibility of grants are how trustworthy the team proposing the grant is, how enthusiastic they are about the topic, and how much effort they are willing to put into it. But we wouldn't consider those criteria sufficient when it comes to vaccine development: we would also want to see the researchers' track record in the field of vaccination, to hear what their peers think of the methods they plan to employ, and so on. The very same holds for research on the far future. Some may reply that the academic world is insufficiently engaged in these topics, or biased against them, but that still doesn't mean there are no expert researchers competent to evaluate the given grants (moreover, requests for expert evaluations can be formulated so as to target specific methodological questions and minimize the effect of bias). At the end of the day, if the research is to have an impact, it will have to gain the attention of that same academic world, in which case it is important to engage with its opinions and to inform projects of possible objections early on. I could say more about the dangers of bias in reviews and how to mitigate those risks, so we can come back to this topic if anyone's interested.
Finally, I hope we can continue this conversation without closing it prematurely. I have tried to do the same with EAF and their research-related policy, but unfortunately, they have never provided any explanation for why expert reviewers are not asked to evaluate the research projects they fund (I plan to write a separate, longer post on that as soon as I find some free time, but I'd be happy to provide further background in the meantime if anyone is interested).