3c. Other research, especially “learning to reason from humans,” looks more promising than HRAD (75%?)
From the perspective of an observer who can only judge from what’s published online, I’m worried that Paul’s approach only looks more promising than MIRI’s because it’s less “mature”, having received less scrutiny and criticism from others. I’m not sure what’s happening internally in various research groups, but the amount of online discussion about Paul’s approach has to be at least an order of magnitude less than what MIRI’s approach has received.
(Looking at the thread cited by Rob Bensinger, various people including MIRI people have apparently looked into Paul’s approach but have not written down their criticisms. I’ve been trying to better understand Paul’s ideas myself and point out some difficulties that others may have overlooked, but this is hampered by the fact that Paul seems to be the only person who is working on the approach and can participate on the other side of the discussion.)
I think Paul’s approach is certainly one of the most promising approaches we currently have, and I wish people paid more attention to it (and/or wrote down their thoughts about it more), but it seems much too early to cite it as an example of an approach that is more promising than HRAD and therefore makes MIRI’s work less valuable.
I agree with this basic point, but I think on the other side there is a large gap in concreteness that makes it much easier to usefully criticize my approach (I’m at the stage of actually writing pseudocode and code which we can critique).
So far I think that the problems in my approach will also appear for MIRI’s approach. For example:
Solomonoff induction or logical inductors have reliability problems that are analogous to reliability problems for machine learning. So to carry out MIRI’s agenda either you need to formulate induction differently, or you need to somehow solve these problems. (And as far as I can tell, the most promising approaches to this problem apply both to MIRI’s version and the mainstream ML version.) I think Eliezer has long understood this problem and has alluded to it, but it hasn’t been the topic of much discussion (I think largely because MIRI/Eliezer have so many other problems on their plates).
Capability amplification requires breaking cognitive work down into smaller steps. MIRI’s approach also requires such a breakdown. Capability amplification is easier in a simple formal sense (that if you solve the agent foundations you will definitely solve capability amplification, but not the other way around).
I’ve given some concrete definitions of deliberation/extrapolation, and there’s been public argument about whether they really capture human values. I think CEV has avoided those criticisms not because it solves the problem, but because it is sufficiently vague that it’s hard to criticize along these lines (and there are sufficiently many other problems that this one isn’t even at the top of the list). If you want to actually give a satisfying definition of CEV, I feel you are probably going to have to go down the same path that started with this post. I suspect Eliezer has some ideas for how to avoid these problems, but at this point those ideas have been subject to even less public discussion than my approach.
I agree there are further problems in my agenda that will be turned up by my discussion. But I’m not sure there are fewer such problems than for the MIRI agenda, since I think that being closer to concreteness may more than outweigh the smaller amount of discussion.
If you agree that many of my problems also come up eventually for MIRI’s agenda, that’s good news about the general applicability of MIRI’s research (e.g. the reliability problems for Solomonoff induction may provide a good bridge between MIRI’s work and mainstream ML), but I think it would also be a good reason to focus on the difficulties that are common to both approaches rather than to problems like decision theory / self-reference / logical uncertainty / naturalistic agents / ontology identification / multi-level world models / etc.
And as far as I can tell, the most promising approaches to this problem apply both to MIRI’s version and the mainstream ML version.
I’m not sure which approaches you’re referring to. Can you link to some details on this?
Capability amplification requires breaking cognitive work down into smaller steps. MIRI’s approach also requires such a breakdown. Capability amplification is easier in a simple formal sense (that if you solve the agent foundations you will definitely solve capability amplification, but not the other way around).
I don’t understand how this is true. I can see how solving FAI implies solving capability amplification (just emulate the FAI at a low level *), but if all you had was a solution that allows a specific kind of agent (e.g., with values well-defined apart from its implementation details) keep those values as it self-modifies, how does that help a group of short-lived humans who don’t know their own values break down an arbitrary cognitive task and perform it safely and as well as an arbitrary competitor?
(* Actually, even this isn’t really true. In MIRI’s approach, an FAI does not need to be competitive in performance with every AI design in every domain. I think the idea is to either convert mainstream AI research into using the same FAI design, or gain a decisive strategic advantage via superiority in some set of particularly important domains.)
My understanding is, MIRI’s approach is to figure out how to safely increase capability by designing a base agent that can make safe use of arbitrary amounts of computing power and can safely improve itself by modifying its own design/code. The capability amplification approach is to figure out how to safely increase capability by taking a short-lived human as the given base agent, making copies of it, and organizing how the copies work together. These seem like very different problems with their own difficulties.
I think CEV has avoided those criticisms not because it solves the problem, but because it is sufficiently vague that it’s hard to criticize along these lines (and there are sufficiently many other problems that this one isn’t even at the top of the list).
I agree that in this area MIRI’s approach and yours face similar difficulties. People (including me) have criticized CEV for being vague and likely very difficult to define/implement though, so MIRI is not exactly getting a free pass by being vague. (I.e., I assume Daniel already took this into account.)
But I’m not sure there are fewer such problems than for the MIRI agenda, since I think that being closer to concreteness may more than outweigh the smaller amount of discussion.
This seems like a fair point, and I’m not sure how to weight these factors either. Given that discussion isn’t particularly costly relative to the potential benefits, an obvious solution is just to encourage more of it. Someone ought to hold a workshop to talk about your ideas, for example.
I think it would also be a good reason to focus on the difficulties that are common to both approaches
This makes sense.
On capability amplification:
MIRI’s traditional goal would allow you to break cognition down into steps that we can describe explicitly and implement on transistors, things like “perform a step of logical deduction,” “adjust the probability of this hypothesis,” “do a step of backwards chaining,” etc. This division does not need to be competitive, but it needs to be reasonably close (close enough to obtain a decisive advantage).
Capability amplification requires breaking cognition down into steps that humans can implement. This decomposition does not need to be competitive, but it needs to be efficient enough that it can be implemented during training. Humans can obviously implement more than transistors; the main difference is that in the agent foundations case you need to figure out every response in advance (but then can have a correspondingly greater reason to think that the decomposition will work / will preserve alignment).
I can talk in more detail about the reduction from (capability amplification --> agent foundations) if it’s not clear whether it is possible and it would have an effect on your view.
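(To make the kind of decomposition being discussed here concrete, below is a toy Python sketch of the amplification recursion: an overseer H answers a question by splitting it into subquestions, delegating those to copies of itself, and combining the results. The function names, the arithmetic example, and the recursion scheme are illustrative assumptions, not a description of Paul’s actual proposal or of MIRI’s.)

```python
# Toy sketch of breaking cognitive work into smaller steps.
# decompose / answer_directly / combine stand in for whatever the
# overseer H actually does with a question; here they just handle a
# toy arithmetic expression like "2+3+5".

def decompose(question):
    # Split the question into easier subquestions (empty list = no split).
    parts = question.split("+")
    return parts if len(parts) > 1 else []

def answer_directly(question):
    # Base case: a question small enough for H to answer in one step.
    return int(question)

def combine(sub_answers):
    # Assemble subanswers into an answer to the original question.
    return sum(sub_answers)

def amplify(question, depth=0, max_depth=10):
    """Answer a question by recursively delegating subquestions to copies of H."""
    subquestions = decompose(question)
    if not subquestions or depth >= max_depth:
        return answer_directly(question)
    return combine([amplify(q, depth + 1, max_depth) for q in subquestions])

print(amplify("2+3+5"))  # -> 10
```

The contrast drawn above is then about who implements these steps and when: explicit, transistor-level rules fixed in advance (the agent foundations picture) versus steps a human overseer carries out during training (the capability amplification picture).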
On competitiveness:
I would prefer to be competitive with non-aligned AI, rather than count on forming a singleton, but this isn’t really a requirement of my approach. When comparing difficulty of two approaches you should presumably compare the difficulty of achieving a fixed goal with one approach or the other.
On reliability:
On the agent foundations side, it seems like plausible approaches involve figuring out how to peer inside the previously-opaque hypotheses, or understanding what characteristic of hypotheses can lead to catastrophic generalization failures and then excluding those from induction. Both of these seem likely applicable to ML models, though would depend on how exactly they play out.
On the ML side, I think the other promising approaches involve either adversarial training or ensembling / unanimous votes, which could be applied to the agent foundations problem.
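(One concrete way to read the “ensembling / unanimous votes” idea, offered as an illustrative sketch rather than anyone’s stated proposal: only commit to an output when every member of an ensemble agrees, and otherwise abstain or defer, so that a single badly-generalizing model cannot determine the action on its own.)

```python
# Toy "unanimous vote" ensemble: commit to an answer only when all
# members agree; otherwise abstain (fall back to a safe default,
# ask a human, etc.).

def unanimous_vote(models, x, fallback="abstain"):
    predictions = [m(x) for m in models]
    if all(p == predictions[0] for p in predictions):
        return predictions[0]
    return fallback

# Hypothetical models: two generalize sensibly, one fails early.
model_a = lambda x: "safe" if x < 1000 else "unsafe"
model_b = lambda x: "safe" if x < 1000 else "unsafe"
model_c = lambda x: "safe" if x < 100 else "unsafe"  # bad generalization

print(unanimous_vote([model_a, model_b, model_c], 50))   # -> "safe"
print(unanimous_vote([model_a, model_b, model_c], 500))  # -> "abstain"
```

Adversarial training attacks the same problem from the other direction, by actively searching for inputs on which behavior diverges and training those failures away; in principle either idea could be aimed at the opaque hypotheses inside an induction scheme as well as at ML models.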
I can talk in more detail about the reduction from (capability amplification --> agent foundations) if it’s not clear whether it is possible and it would have an effect on your view.
Yeah, this is still not clear. Suppose we had a solution to agent foundations; I don’t see how that necessarily helps me figure out what to do as H in capability amplification. For example, the agent foundations solution could say: use (some approximation of) exhaustive search in the following way, with your utility function as the objective function. But that doesn’t help me, because I don’t have a utility function.
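(A toy illustration of this objection, with every specific invented for the example: an agent-foundations-style recipe of the form “search over actions and take the one with the highest expected utility under your world model” is only usable by an agent that can supply the utility function, which is exactly the ingredient H is missing.)

```python
# Toy "exhaustive search against a utility function" recipe. The recipe
# itself is straightforward; the objection above is that its first input
# ("here is my utility function") is not something a short-lived human H
# can actually provide.

def expected_utility(action, world_model, utility):
    return sum(p * utility(outcome) for outcome, p in world_model(action).items())

def choose_action(actions, world_model, utility):
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))

# Hypothetical world model: each action induces a distribution over outcomes.
world_model = lambda a: {"good": 0.9, "bad": 0.1} if a == "cautious" else {"good": 0.5, "bad": 0.5}
utility = lambda outcome: 1.0 if outcome == "good" else -10.0  # <- the missing ingredient

print(choose_action(["cautious", "reckless"], world_model, utility))  # -> "cautious"
```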
When comparing difficulty of two approaches you should presumably compare the difficulty of achieving a fixed goal with one approach or the other.
My point was that HRAD potentially enables the strategy of pushing mainstream AI research away from opaque designs (which are hard to compete with while maintaining alignment, because you don’t understand how they work and you can’t just blindly copy the computation that they do without risking safety), whereas in your approach you always have to worry about “how do I compete with an AI that doesn’t have an overseer, or has an overseer who doesn’t care about safety and just lets the AI use whatever opaque and potentially dangerous technique it wants”.
On the agent foundations side, it seems like plausible approaches involve figuring out how to peer inside the previously-opaque hypotheses, or understanding what characteristic of hypotheses can lead to catastrophic generalization failures and then excluding those from induction.
Oh I see. In my mind the problems with Solomonoff Induction mean that it’s probably not the right way to define how induction should be done as an ideal, so we should look for something kind of like Solomonoff Induction but better, not try to patch it by doing additional things on top of it. (Analogously: instead of trying to figure out exactly when CDT would make wrong decisions and adding more complexity on top of it to handle those cases, replace it with UDT.)
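(For readers who want the object under discussion pinned down, here is a toy, finite-hypothesis analogue of Solomonoff induction. The real definition mixes over all computable programs and is uncomputable, so this only shows the shape of the idea and where an opaque hypothesis enters the mixture; the numbers and hypotheses are invented for the example.)

```python
# Toy analogue of Solomonoff induction over a finite hypothesis class:
# weight each hypothesis by 2^(-description_length), discard hypotheses
# inconsistent with the data so far, and predict with the weighted mixture.

def posterior_weights(hypotheses, data):
    # hypotheses: list of (predict_fn, description_length) pairs.
    weights = {}
    for i, (predict, length) in enumerate(hypotheses):
        consistent = all(predict(t) == x for t, x in enumerate(data))
        weights[i] = 2.0 ** (-length) if consistent else 0.0
    total = sum(weights.values())
    if total == 0:
        return {}
    return {i: w / total for i, w in weights.items()}

def prob_next_is_one(hypotheses, data):
    weights = posterior_weights(hypotheses, data)
    t = len(data)
    return sum(w for i, w in weights.items() if hypotheses[i][0](t) == 1)

# Two transparent hypotheses and one that agrees early but flips later.
all_ones = (lambda t: 1, 3)
alternating = (lambda t: t % 2, 4)
treacherous = (lambda t: 1 if t < 5 else 0, 5)

print(prob_next_is_one([all_ones, alternating, treacherous], [1, 1, 1]))
```

The reliability worry is that nothing in the weighting itself distinguishes a hypothesis like `treacherous`, which matches the data for a while and then generalizes catastrophically, from a trustworthy one; that is the sense in which the problem is analogous to reliability problems for learned models, whether one responds by patching the mixture or by replacing the formalism.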
My point was that HRAD potentially enables the strategy of pushing mainstream AI research away from opaque designs (which are hard to compete with while maintaining alignment, because you don’t understand how they work and you can’t just blindly copy the computation that they do without risking safety), whereas in your approach you always have to worry about “how do I compete with an AI that doesn’t have an overseer, or has an overseer who doesn’t care about safety and just lets the AI use whatever opaque and potentially dangerous technique it wants”.
I think both approaches potentially enable this, but are VERY unlikely to deliver. MIRI seems more bullish that fundamental insights will yield AI that is just plain better (Nate gave me the analogy of Judea Pearl coming up with Causal PGMs as such an insight), whereas Paul just seems optimistic that we can get a somewhat negligible performance hit for safe vs. unsafe AI.
But I don’t think MIRI has given very good arguments for why we might expect this; it would be great if someone can articulate or reference the best available arguments.
I have a very strong intuition that dauntingly large safety-performance trade-offs are extremely likely to persist in practice, thus the only answer to the “how do I compete” question seems to be “be the front-runner”.
Shouldn’t this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.
Personally, I feel like I understand Paul’s approach better than I understand MIRI’s approach, despite having spent more time on the latter. I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.
Shouldn’t this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.
The fact that Paul hasn’t had a chance to hear from many of his (would-be) critics and answer them means we don’t have a lot of information about how promising his approach is, hence my “too early to call it more promising than HRAD” conclusion.
I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.
Have you written down these objections somewhere? My worry is basically that different people have looked at Paul’s approach and each thought of a different set of objections, and they think “that’s not so bad” without knowing that there’s actually a whole bunch of other objections out there, including additional ones that people would find if they thought and talked about Paul’s ideas more.
I think there’s something to this—thanks.
To add onto Jacob and Paul’s comments, I think that while HRAD is more mature in the sense that more work has gone into solving HRAD problems and critiquing possible solutions, the gap seems much smaller to me when it comes to the justification for thinking HRAD is promising vs justification for Paul’s approach being promising. In fact, I think the arguments for Paul’s work being promising are more solid than those for HRAD, despite it only being Paul making those arguments—I’ve had a much harder time understanding anything more nuanced than the basic case for HRAD I gave above, and a much easier time understanding why Paul thinks his approach is promising.
Daniel, while re-reading one of Paul’s posts from March 2016, I just noticed the following:
[ETA: By the end of 2016 this problem no longer seems like the most serious.]
…
[ETA: while robust learning remains a traditional AI challenge, it is not at all clear that it is possible. And meta-execution actually seems like the ingredient furthest from existing ML practice, as well as having non-obvious feasibility.]
My interpretation of this is that between March 2016 and the end of 2016, Paul updated the difficulty of his approach upwards. (I think given the context, he means that other problems, namely robust learning and meta-execution, are harder, not that informed oversight has become easier.) I wanted to point this out to make sure you updated on his update. Clearly Paul still thinks his approach is more promising than HRAD, but perhaps not by as much as before.
the gap seems much smaller to me when it comes to the justification for thinking HRAD is promising vs justification for Paul’s approach being promising
This seems wrong to me. For example, in the “learning to reason from humans” approaches, the goal isn’t just to learn to reason from humans, but to do it in a way that maintains competitiveness with unaligned AIs. Suppose a human overseer disapproves of their AI using some set of potentially dangerous techniques; how can we then ensure that the resulting AI is still competitive? Once someone points this out, proponents of the approach, to continue thinking their approach is promising, would need to give some details about how they intend to solve this problem. After that, the justification for thinking the approach is promising becomes more subtle and harder to understand. I think conversations like this have occurred for MIRI’s approach far more than Paul’s, which may be a large part of why you find Paul’s justifications easier to understand.
This doesn’t match my experience of why I find Paul’s justifications easier to understand. In particular, I’ve been following MIRI since 2011, and my experience has been that I didn’t find MIRI’s arguments (about specific research directions) convincing in 2011*, and since then have had a lot of people try to convince me from a lot of different angles. I think pretty much all of the objections I have are ones I generated myself, or would have generated myself. Although, the one major objection I didn’t generate myself is the one that I feel most applies to Paul’s agenda.
( * There was a brief period shortly after reading the sequences that I found them extremely convincing, but I think I was much more credulous then than I am now. )
I think the argument along these lines that I’m most sympathetic to is that Paul’s agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people’s collective blind spot (because we’re all blinded by the same paradigm).
That actually didn’t cross my mind before, so thanks for pointing it out. After reading your comment, I decided to look into Open Phil’s recent grants to MIRI and OpenAI, and noticed that of the 4 technical advisors Open Phil used for the MIRI grant investigation (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei), all either have an ML background or currently advocate an ML-based approach to AI alignment. For the OpenAI grant, however, Open Phil didn’t seem to have similarly engaged technical advisors who might be predisposed to be critical of the potential grantee (e.g., HRAD researchers), and in fact two of the Open Phil technical advisors are also employees of OpenAI (Paul Christiano and Dario Amodei). I have to say this doesn’t look very good for Open Phil in terms of making an effort to avoid potential blind spots and bias.
(Speaking for myself, not OpenPhil, who I wouldn’t be able to speak for anyways.)
For what it’s worth, I’m pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can’t really think of anyone more familiar with MIRI’s work than Paul who isn’t already at MIRI (note that Paul started out pursuing MIRI’s approach and shifted in an ML direction over time).
That being said, I agree that the public write-up on the OpenAI grant doesn’t reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward (although I’m not sure that adding HRAD researchers as TAs is the solution; also note that OPP does consult regularly with MIRI staff, though I don’t know if they did for the OpenAI grant).
I can’t really think of anyone more familiar with MIRI’s work than Paul who isn’t already at MIRI (note that Paul started out pursuing MIRI’s approach and shifted in an ML direction over time).
The Agent Foundations Forum would have been a good place to look for more people familiar with MIRI’s work. Aside from Paul, I see Stuart Armstrong, Abram Demski, Vadim Kosoy, Tsvi Benson-Tilsen, Sam Eisenstat, Vladimir Slepnev, Janos Kramar, Alex Mennen, and many others. (Abram, Tsvi, and Sam have since joined MIRI, but weren’t employees of it at the time of the Open Phil grant.)
That being said, I agree that the public write-up on the OpenAI grant doesn’t reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward
I had previously seen some complaints about the way the OpenAI grant was made, but until your comment, hadn’t thought of a possible group blind spot due to a common ML perspective. If you have any further insights on this and related issues (like why you’re critical of deep learning but still think the grant to OpenAI was a pretty good idea, what are your objections to Paul’s AI alignment approach, how could Open Phil have done better), would you please write them down somewhere?