I’d be curious to hear an explanation of how the team for the Long-Term Future Fund was selected. If they are expected to evaluate grants, including research grants, how do they plan to do that, what qualifies them for this job, and in case they are not qualified, which experts do they plan to invite on such occasions?
From their bio page I don’t see which of them should count as an expert in the relevant fields of research (and in view of which track record), which is why I am asking. Thanks!
Hi Dunja, I’m Matt Fallshaw, Chair of the fund. This response is an attempt to be helpful, but I’m not entirely sure what, in answer to your question, would qualify as a qualification; perhaps it’s relevant that I’ve been following the field for over 10 years, I’ve been an advisor to MIRI (I joined their Board of Directors in 2014, a position I recently had to give up, and currently spend close to half of my time working on MIRI projects), and I’m an advisor to BERI.
I chose the expert team (in consultation with Marek Duda), and I chose them for (among other things) their intelligence, knowledge and connections (to both advisors and likely grantee orgs or individuals).
We absolutely do intend to consult with experts (including Nick and Jonas, our listed advisors, and outside experts) when we don’t feel that we have enough knowledge ourselves to properly assess a grant. Our connections span multiple continents and (when we don’t feel qualified ourselves) we will choose advisors relevant to each grant we consider.
… I’m not sure whether that response is going to be satisfying, so feel free to clarify your question and I’ll try again.
Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is very well aware of (I have been a strong critic of EAF’s policy on, and implementation of, research grants, including those directed at MIRI and FRI).
My main worry is that evaluating research grants cannot be done without having them assessed by expert researchers in the given domain, that is, people with a proven track record in the relevant field of research. I think the best way to see why this matters is to take any other scientific domain: medicine, physics, etc. If we wanted to evaluate whether a certain research grant in medicine should be funded (e.g. the development of an important vaccine), it wouldn’t be enough to just like the objective of the grant. We would have to assess:
Methodological feasibility of the grant: are the proposed methods conducive to the stated goals? How will the project react to possible obstacles, and which alternative methods would be employed in such cases?
Fit of the project within the state of the art: how well is the grant informed by the relevant research in the given domain (e.g. are important methods and insights overlooked; is another research team already working on a related topic, where combining insights would increase the efficiency of the current project; etc.)?
etc.
Clearly, answering these questions cannot be done by anyone who is not an expert in medicine. My point is that the same goes for research in any other scientific domain, from philosophy to AI. Hence, if your team consists of people who are enthusiastic about the topic, who have experience reading about it, or who have experience managing EA grants and non-profit organizations, that’s not adequate expertise for evaluating research grants. The same goes for your advisors: Nick has a PhD in philosophy, but that’s not enough to make one an expert in, say, AI (it’s not enough to make one an expert in many domains of philosophy either, unless one has a track record of continuous research in the given domain). Jonas has a background in medicine, economics, and charity evaluation, but that is not the same as active engagement in research.
Inviting expert researchers to evaluate each submitted project is the only way to award research grants responsibly. That’s precisely what both academic and non-academic funding institutions do. Otherwise, how can we possibly argue that the funded research is promising and that we have done the best we can to estimate its effectiveness? This matters not only for ensuring the quality of the research, but also for handling donors’ contributions responsibly, in line with the values of EA in general.
My impression is that so far the main criteria employed when assessing the feasibility of grants have been how trustworthy the team proposing the grant is, how enthusiastic they are about the topic, and how much effort they are willing to put into it. But we wouldn’t take those criteria to be enough when it comes to the development of vaccines. We’d also want to see the track record of the researchers in the field of vaccination, we’d want to hear what their peers think of the methods they wish to employ, etc. The very same holds for research on the far future. Some may reply that the academic world is insufficiently engaged in some of these topics, or biased against them, but that still doesn’t mean there are no expert researchers competent to evaluate the given grants (moreover, requests for expert evaluations can be formulated so as to target specific methodological questions and minimize the effect of bias). At the end of the day, if the research is to have an impact, it will have to gain the attention of that same academic world, in which case it is important to engage with its opinions and inform projects of possible objections early on. I could say more about the dangers of bias in reviews and how to mitigate those risks, so we can come back to this topic if anyone is interested.
Finally, I hope we can continue this conversation without prematurely closing it. I have tried to do the same with EAF and their research-related policy, but unfortunately they have never provided any explanation for why expert reviewers are not asked to evaluate the research projects they fund (I plan to write a separate, longer post on that as soon as I catch some free time, but I’d be happy to provide further background in the meantime if anyone is interested).
Update: this is all the more important in view of the common ways one may accidentally cause harm by trying to do good, which I’ve just learned about through DavidNash’s post. As the article points out, having the informed opinion of experts, and a dense network with them, can decrease the chances of harmful impacts, such as reputational harm or locking in on suboptimal choices.
What would you say qualifies as expertise in these fields? It’s ambiguous, because it’s not like universities are offering PhDs in “Safeguarding the Long-Term Future.”
That should always depend on the project at hand: if the project is primarily in a specific domain of AI research, then you need reviewers working precisely in that domain of AI; if it’s in ethics, then you need experts working in ethics; if it’s interdisciplinary, then you try to get reviewers from the respective fields. This also shows that it will be rather difficult (if not impossible) to have a single expert team competent to evaluate every candidate project. Instead, the team should be competent in selecting adequate expert reviewers (similarly to journal editors, who invite expert reviewers for the individual papers submitted to the journal). Of course, the team can pre-select projects, determining which are worth sending for expert review, but for that it’s usually useful to have at least some experience with research in one of the relevant domains, as well as with research proposals.