I think the part that is missing from your understanding is part of MIRI's intellectual DNA.
In it you can find a lot of Eliezer Yudkowsky's thought; I would recommend reading his latest book "Inadequate Equilibria", where he explains some of the reasons why the normal research environment may be inadequate in some respects.
MIRI was explicitly founded on the premise of freeing some people from the "publish or perish" pressure, which severely limits what people in normal academia work on and care about. If you assign enough probability to this approach being worth taking, it does make sense to base decisions about funding MIRI on different criteria.
Hi Jan, I am aware that the "publish or perish" environment may be problematic (and that MIRI isn't very fond of it), but we should distinguish between publishing as many papers as possible and publishing at least some papers in high-impact journals.
Now, if we don't want to base our assessment of effectiveness and efficiency on any publications, then we need something else. So what would these different criteria you mention be? How do we assess whether a research project is effective? And how do we assess whether the project has proven effective over the course of time?
What I would do when evaluating a potentially high-impact, high-uncertainty "moonshot type" research project would be to ask a trusted, highly knowledgeable researcher to assess it. I would not evaluate publication output, but whether the effort looks sensible, the people working on it are good, and some progress is being made (even if only in discovering things that do not work).
OK, but then, why not the following:
Why not ask at least 2-3 experts? Surely, one of them could be (unintentionally) biased or misinformed, or might simply overlook an important point in the project and assess it too negatively or too positively.
If we don't assess the publication output of the project initiator(s), how do we ensure that these particular people, rather than some other scholars, would pursue the given project most effectively and efficiently? Surely, some criteria will matter: for example, if I have a PhD in philosophy, I will be quite unqualified to conduct a project in experimental physics. So some competence seems necessary. How do we assure it, and why not care about effectiveness and efficiency at this step?
I agree that negative results are valuable and that some progress should be made. So what progress has MIRI shown over the course of the last 3 years, such that it can be identified as efficient and effective research?
Finally, don't you think that making an open call for projects on the given topic and awarding the one(s) that seem most promising would be a more reliable method, in view of possible errors in judgment, than just evaluating whoever is the first to apply for the grant?