Disclosure: I'm both a direct and indirect beneficiary of Open Phil funding. I am also a donor to MIRI, albeit an unorthodox one.

[I]f you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter).

I have a two-year out-of-date rough draft on bibliometrics re. MIRI, which likely won't get updated, both because it has been superseded by Lark's excellent work and because of other constraints on my time. That said:
My impression of computer science academia was that (unlike in most other fields) conference presentations are significantly more important than journal publications. Further, when one looks at the work on MIRI's page from 2016-2018, I see two papers at Uncertainty in AI (UAI), which this site suggests is a "top-tier" conference. (Granted, for one of these neither of the authors has a MIRI institutional affiliation, although "many people at MIRI" are acknowledged.)
Also, parts of their logical induction paper were published/presented at TARK 2017, which is a reasonable fit for the paper and a respectable, if not top-tier, conference.
Oh, I haven't seen that publication on their website. If it was a peer-reviewed publication, that would indeed be something (and the kind of thing I've been looking for). Could you please link to the publication?
Here's a link: http://eptcs.web.cse.unsw.edu.au/paper.cgi?TARK2017.16
Cool, thanks!
Thanks for the comment, Gregory!

I must say, though, that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or in the humanities, for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations. Only once your research results are published in a peer-reviewed journal (including peer-reviewed conference proceedings) can other scholars in the field take them as a (minimally) reliable source for further research that builds on them. By the way, many prestigious AI conferences actually come with peer-reviewed proceedings (take e.g. AAAI or IJCAI), so you can't even present at the conference without submitting a paper.
Again, MIRI might be doing excellent work. All I am asking is: in view of which criteria can we judge this to be the case? What are the criteria of assessment, which the EA community finds extremely important when it comes to the assessment of charities, and which I think we should find just as important when it comes to the funding of scientific research?
I must say, though, that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or in the humanities, for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations.

Technical research on AI generally (although not exclusively) falls under the heading of computer science. In this field, it is not only the prevailing (though not universal) view of practitioners that conference presentations are academically "better" (here, here, etc.), but also that they tend to have similar citation counts.
Oh, but you are confusing conference presentations with conference publications. Check the links you've just sent me: they discuss the latter, not the former. You cannot cite a conference presentation (or at least that's not what's usually understood by "citations", and definitely not in the links from your post), but only a publication. Conference publications in the field of AI are usually indeed peer-reviewed, and yes, they are often even more relevant than journal publications, at least if published in prestigious conference proceedings (as I stated above).
Now, on MIRI's publication page there are no conference publications in 2017, and for 2016 there are mainly technical reports, which is fine, but these should again not be confused with regular (conference) publications, at least according to the information provided by the publisher. Note that this doesn't mean technical reports are of no value! To the contrary. I am just making an overall analysis of the state of MIRI's publication record, trying to figure out what they've published, and then how this compares with the publication records of similarly sized research groups in a similar domain. If I am wrong on any of these points, I'll be happy to revise my opinion!
This paper was in 2016, and is included in the proceedings of the UAI conference that year. Does this not count?
Sure :) I saw that one on their website as well. But a few papers over the course of 2-3 years isn't very representative of an effective research group, is it? If you look at the groups of scholars who receive (way smaller) grants in the field of AI, their output is way more effective. But even if we don't count publications and instead speak in terms of the effectiveness of those few publications, I am not seeing anything. If you are, maybe you can explain it to me?
I regret I don't have much insight to offer on the general point. When I was looking into the bibliometrics myself, a very broad comparison to (e.g.) Norwegian computer scientists gave figures like ~0.5 to 1 paper per person-year, with which MIRI's track record seemed about on par if we look at peer-reviewed technical work. I wouldn't be surprised to find better-performing research groups (in terms of papers or highly cited papers), though I would be slightly more surprised if those groups were doing AI safety work.
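For concreteness, here is a minimal sketch of the back-of-envelope "papers per person-year" comparison described above. All inputs (paper count, headcount, time window) are hypothetical placeholders chosen for illustration, not actual figures for MIRI, the Norwegian comparison group, or any other group.

```python
# Illustrative back-of-envelope for the "papers per person-year" benchmark.
# All numbers below are hypothetical placeholders.

def papers_per_person_year(num_papers: int, headcount: int, years: float) -> float:
    """Peer-reviewed papers produced per researcher per year."""
    return num_papers / (headcount * years)

# Hypothetical example: a 6-person group publishing 9 peer-reviewed
# papers over 3 years.
rate = papers_per_person_year(num_papers=9, headcount=6, years=3)
print(f"{rate:.2f} papers per person-year")  # 0.50

# Compare against the rough benchmark range quoted above (~0.5 to 1).
benchmark_low, benchmark_high = 0.5, 1.0
print("within the quoted range:", benchmark_low <= rate <= benchmark_high)
```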
I think the part which is lacking in your understanding is part of MIRI's intellectual DNA.

In it you can find a lot of Eliezer Yudkowsky's thought. I would recommend reading his latest book, Inadequate Equilibria, where he explains some of the reasons why the normal research environment may be inadequate in some respects.

MIRI was explicitly founded on the premise of freeing some people from the "publish or perish" pressure, which severely limits what people in normal academia work on and care about. If you assign enough probability to this approach being worth taking, it does make sense to base decisions about funding MIRI on different criteria.
Hi Jan, I am aware that the "publish or perish" environment may be problematic (and that MIRI isn't very fond of it), but we should distinguish between publishing as many papers as possible and publishing at least some papers in high-impact journals.

Now, if we don't want to base our assessment of effectiveness and efficiency on any publications, then we need something else. So what would be these different criteria you mention? How do we assess that a research project is effective? And how do we assess that the project has shown itself to be effective over the course of time?
What I would do when evaluating a potentially high-impact, high-uncertainty "moonshot type" research project would be to ask some trusted, highly knowledgeable researcher to assess it. I would not evaluate publication output, but whether the effort looks sensible, whether the people working on it are good, and whether some progress is being made (even if only in discovering things which do not work).
OK, but then, why not the following:
Why not ask at least 2-3 experts? Surely any one of them could be (unintentionally) biased or misinformed, or might simply miss an important point in the project and assess it too negatively or too positively?
If we don't assess the publication output of the project initiator(s), how do we ensure that these very people, rather than some other scholars, would pursue the given project most effectively and efficiently? Surely some criteria will matter: for example, if I have a PhD in philosophy, I will be quite unqualified to conduct a project in the domain of experimental physics. So some competence seems necessary. How do we ensure it, and why not care about effectiveness and efficiency at this step as well?
I agree that negative results are valuable, and that some progress should be made. So what is the progress MIRI has shown over the course of the last three years, such that its research can be identified as efficient and effective?
Finally, don't you think that making an open call for projects on a given topic, and awarding the one(s) that seem most promising, would be more reliable in view of possible errors in judgment than just evaluating whoever is first to apply for the grant?