Nate, my thanks for your reply. I regret that I may not have expressed myself well enough for your reply to precisely target the worries I raised; I also regret that, insofar as your reply overcomes my poor expression, it makes my worries grow deeper.
If I read your approach to the Open Phil review correctly, you submitted some of the more technically unimpressive papers for review because they demonstrated the lead author developing some interesting ideas about research direction, and because they in some sense led up to the "big result" (Logical Induction). If so, this looks like a pretty surprising error: one of the standard worries facing MIRI, given its fairly slender publication record, is the technical quality of the work, and it seemed pretty clear that assessing this was the objective behind sending papers out for evaluation. Under whatever constraints Open Phil provided, I'd have sent the "best by academic lights" papers I had.
In candour, I think "MIRI barking up the wrong tree" and/or (worse) "MIRI not doing that much good research" is a much better explanation for what is going on than "inferential distance". I struggle to imagine a fairer (or more propitious-to-MIRI) hearing than the Open Phil review: it involved two people (Dewey and Christiano) who previously worked with you guys, Dewey spent over 100 hours trying to understand the value of your work, and they commissioned external experts in the field to review it.
Suggesting that the fairly adverse review that resulted may be a product of a lack of understanding makes MIRI seem more like a mystical tradition than a research group. If MIRI is unable to convince someone like Dewey, the prospects of it making the necessary collaborations or partnerships with the wider AI community look grim.
I don't think we've ever worked with Scott Aaronson, though we're obviously on good terms with him. Also, our approach to decision theory stirred up a lot of interest from professional decision theorists at last year's Cambridge conference; expect more about this in the next few months.
I had Aaronson down as within MIRI's sphere of influence, but if I overstated this I apologize. (I am correct that Yuan previously worked for you, right?)
I look forward to seeing MIRI produce or germinate some concrete results in decision theory. The "underwhelming blockbuster" I referred to above was the TDT/UDT etc. work, which MIRI hyped widely but which has since languished in obscurity.
There are a lot of reasons donors might be retracting; I'd be concerned if the reason is that they're expecting Open Phil to handle MIRI's funding on their own, or that they're interpreting some action of Open Phil's as a signal that Open Phil wants broadly Open-Phil-aligned donors to scale back support for MIRI.
It may simply be the usual (albeit regrettable) tendency of donors to jockey to be the "last resort"; I guess it would depend on what the usual distribution of donations is with respect to fundraising deadlines.
If donors are retracting, I would speculate Open Phil's report may be implicated. One potential model would be donors interpreting Open Phil's fairly critical support as an argument against funding further growth by MIRI, and thus pulling back so that MIRI's overall revenue hovers at previous-year levels (I don't read the Open Phil report as setting a particular revenue target they wanted you guys to have). Perhaps a simpler explanation is that having a large and respected org conduct a fairly in-depth review and deliver a fairly mixed verdict makes previously enthusiastic donors update to be more tepid, and perhaps direct their donations to other players in the AI space.
With respect, I doubt I will change my mind due to MIRI giving further write-ups, and if donors are pulling back in part "due to" Open Phil, I doubt it will change their minds either. It may be that "high-quality non-standard formal insights" are what you guys do, but the value of that is pretty illegible on its own: it needs to be converted into tangible accomplishments (e.g. good papers, esteem from others in the field, interactions with industry), first to convince people there is actually something there, but also because this is probably the most plausible route to this comparative advantage having any impact.
Thus far this has not happened to a degree commensurate with MIRI's funding base. I wrote four-and-a-half years ago that I was disappointed in MIRI's lack of tangible accomplishments; I am even more disappointed to find that my remarks now follow fairly similar lines. Happily, this can be fixed: if the logical induction result "takes off" as I infer you guys hope it does, it will likely fix itself. Unless and until then, I remain sceptical about MIRI's value.
Under whatever constraints Open Phil provided, I'd have sent the "best by academic lights" papers I had.
We originally sent Nick Beckstead what we considered our four most important 2015 results, at his request. These were: (1) the incompatibility of the "Inductive Coherence" framework and the "Asymptotic Convergence in Online Learning with Unbounded Delays" framework; (2) the demonstration in "Proof-Producing Reflection for HOL" that a non-pathological form of self-referential reasoning is possible in a certain class of theorem-provers; (3) the reflective oracles result presented in "A Formal Solution to the Grain of Truth Problem," "Reflective Variants of Solomonoff Induction and AIXI," and "Reflective Oracles"; and (4) Vadim Kosoy's "Optimal Predictors" work. The papers we listed under 1, 2, and 4 then got used in an external review process they probably weren't very well-suited for.
I think this was more or less just an honest miscommunication. I told Nick in advance that I only assigned an 8% probability to external reviewers thinking the "Asymptotic Convergence…" result was "good" on its own (and only a 20% probability for "Inductive Coherence"). My impression of what happened is that Open Phil staff interpreted my pushback as saying that I thought the external reviews wouldn't carry much Bayesian evidence (but that the internal reviews still would), where what I was trying to communicate was that I thought the papers didn't carry very much Bayesian evidence about our technical output (and that I thought the internal reviewers would need to speak to us about technical specifics in order to understand why we thought they were important). Thus, we were surprised when their grant decision and write-up put significant weight on the internal reviews of those papers (and they were surprised that we were surprised). This is obviously really unfortunate, and another good sign that I should have committed more time and care to clearly communicating my thinking from the outset.
Regarding picking better papers for external review: We only put out 10 papers directly related to our technical agendas between Jan 2015 and Mar 2016, so the option space is pretty limited, especially given the multiple constraints Open Phil wanted to meet. Optimizing for technical impressiveness and non-obviousness as a stand-alone result, I might have instead gone with Critch's bounded Löb paper and the grain of truth problem paper over the AC/IC results. We did submit the grain of truth problem paper to Open Phil, but they decided not to review it because it didn't meet other criteria they were interested in.
If MIRI is unable to convince someone like Dewey, the prospects of it making the necessary collaborations or partnerships with the wider AI community look grim.
I'm less pessimistic about building collaborations and partnerships, in part because we're already on pretty good terms with other folks in the community, and in part because I think we have different models of how technical ideas spread. Regardless, I expect that with more and better communication, we can (upon re-evaluation) raise Open Phil staff's probability that the work we're doing is important.
More generally, though, I expect this task to get easier over time as we get better at communicating about our research. There's already a body of AI alignment research (and, perhaps, methodology) that requires the equivalent of multiple university courses to understand, but there aren't curricula or textbooks for teaching it. If we can convince a small pool of researchers to care about the research problems we think are important, this will let us bootstrap to the point where we have more resources for communicating information that requires a lot of background and sustained scholarship, as well as more of the institutional signals that this stuff warrants a time investment.
I can maybe make the time expenditure thus far less mysterious if I mention a couple more ways I erred in trying to communicate my model of MIRI's research agenda:
My early discussion with Daniel was framed around questions like "What specific failure mode do you expect to be exhibited by advanced AI systems iff their programmers don't understand logical uncertainty?" I made the mistake of attempting to give straight/non-evasive answers to those sorts of questions and let the discussion focus on that evaluation criterion, rather than promptly saying "MIRI's research directions mostly aren't chosen to directly address a specific failure mode in a notional software system" and "I don't think that's a good heuristic for identifying research that's likely to be relevant to long-run AI safety."
I fell prey to the transparency illusion pretty hard, and that was completely my fault. Midway through the process, Daniel made a write-up of what he had gathered so far; this write-up revealed a large number of miscommunications and places where I thought I had transmitted a concept of mine but Daniel had come away with a very different concept. It's clear in retrospect that we should have spent a lot more time with me having Daniel try to explain what he thought I meant, and I had all the tools to predict this in foresight; but I foolishly assumed that wouldn't be necessary in this case.
(I plan to blog more about the details of these later.)
I think these are important mistakes that show I hadn't sufficiently clarified several concepts in my own head, or spent enough time understanding Daniel's position. My hope is that I can do a much better job of avoiding these sorts of failures in the next round of discussion, now that I have a better model of where Open Phil's staff and advisors are coming from and what the review process looks like.
(I am correct that Yuan previously worked for you, right?)
Yeah, though that was before my time. He did an unpaid internship with us in the summer of 2013, and we've occasionally contracted him to tutor MIRI staff. Qiaochu's also a lot socially closer to MIRI; he attended three of our early research workshops.
Unless and until then, I remain sceptical about MIRI's value.
I think that's a reasonable stance to take, and that there are other possible reasonable stances here too. Some of the variables I expect EAs to vary on include "level of starting confidence in MIRI's mathematical intuitions about complicated formal questions" and "general risk tolerance." A relatively risk-intolerant donor is right to wait until we have clearer demonstrations of success; and a relatively risk-tolerant donor who starts without a very high confidence in MIRI's intuitions about formal systems might be pushed under a donation threshold by learning that an important disagreement has opened up between us and Daniel Dewey (or between us and other people at Open Phil).
Also, thanks for laying out your thinking in so much detail; I suspect there are other people who had more or less the same reaction to Open Phil's grant write-up but haven't spoken up about it. I'd be happy to talk more about this over email, too, including answering Qs from anyone else who wants more of my thoughts on this.
Relevant update: Daniel Dewey and Nick Beckstead of Open Phil have listed MIRI as one of ten "reasonably strong options in causes of interest" for individuals looking for places to donate this year.