Could we re-open this discussion in view of MIRI’s achievements over the course of a year?
A recent trend of providing relatively high research grants (relative to some of the most prestigious research grants across the EU, such as ERC Starting Grants of ~1.5 million EUR) to projects on AI risks and safety made me curious, so I looked a bit more into this topic. What struck me as especially curious is the lack of transparency when it comes to the criteria used to evaluate the projects and to decide how to allocate the funds. Now, for the sake of this question, I am assuming that the research topic of AI risks and safety is important and should be funded (the extent to which it actually is, is beside the point I’m making here and deserves a discussion of its own; so let’s just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).
Particularly surprising was a sudden grant of 3.75 million USD by the Open Philanthropy Project (OPP) to MIRI (https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017). Note that the funding is more than double the amount given to ERC Starting Grantees. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision (https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support). So one would expect that for a grant more than seven times higher, we’d find at least as much. But what we do find is an extremely brief explanation saying that an anonymous expert reviewer has evaluated MIRI’s work as highly promising in view of their paper “Logical Induction”.
Note that in the two years since I first saw this paper online, the very same paper has not been published in any peer-reviewed journal. Moreover, if you check MIRI’s publications (https://intelligence.org/all-publications/), you won’t find a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking if perhaps the publications on their website simply hadn’t been updated, and OPP, asking for the evaluative criteria used when awarding this grant.
MIRI never replied (email sent on February 8). OPP took a while to reply, and today I received the following email:
“Hi Dunja,
Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer’s reasoning in reviewing MIRI’s work. Unfortunately, we don’t have permission to share the reviewer’s identity or reasoning. I’m sorry not to be more helpful with this, and do wish you the best of luck with your research.
Best,
[name blinded in this public post]”
All this is very surprising given that OPP prides itself on transparency. As stated on their website (https://www.openphilanthropy.org/what-open-means-us):
“Creating detailed reports on the causes we’re investigating. Sharing notes from our information-gathering conversations. Publishing writeups and updates on a number of our grants, including our reasoning and reservations before making a grant, and any setbacks and challenges we encounter.”
However, the main problem here is not a mere lack of transparency, but the lack of an effective and efficient funding policy. The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades now. Yet these very basic criteria seem absent from cases such as the one mentioned above. Not only are the criteria used non-transparent, but there has never been an open call for various research groups to submit their projects, with the funding agency then deciding (in view of an expert panel rather than a single reviewer) which project is the most promising. Markers of reliability over the course of research are extremely important if we want to advance effective research, and a panel of experts (rather than a single expert) is extremely important for assuring the procedural objectivity of the assessment.
Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up being used as an argument that organizations close to effective altruism are not effective at all.