Again speaking more for the broad audience:
“Some experts downvote Yudkowsky’s standing to opine” is not a reasonable standard; some experts think vaccines cause autism. You can usually find someone with credentials in a field who will say almost anything.
The responsible thing to do (EDIT: if you’re deferring at all, as opposed to evaluating the situation for yourself) is to go look at the balance of what experts in a field are saying, and in this case, they’re fairly split, with plenty of respected big names (including many who disagree with Eliezer on many questions) saying he knows enough of what he’s talking about to be worth listening to. I get that Yarrow is not convinced, but I trust Hinton, who has reservations of his own but not of the form “Eliezer should be dismissed out of hand for lack of some particular technical expertise.”
Also: when the experts in a field are split, and the question is one of existential danger, it seems that the splitness itself is not reassuring. Experts in nuclear physics do not drastically diverge in their predictions about what will happen inside a bomb or reactor, because we understand nuclear physics. When experts in the field of artificial intelligence have wildly different predictions and the disagreement cannot be conclusively resolved, this is a sign of looseness in everyone’s understanding. And when you ask normal people on the street, “hey, if one expert says an invention will kill everyone, and another says it won’t, and you ask the one who says it won’t where their confidence comes from, and they say ‘because I’m pretty sure we’ll muddle our way through, with unproven techniques that haven’t been invented yet, the risk of killing everyone is probably under 5%,’ how do you feel?”
they tend to feel alarmed.
And that characterization is not uncharitable—the optimists in this debate do not have an actual concrete plan. You can just go check. It all ultimately boils down to handwaving and platitudes and “I’m sure we’ll stay ahead of capabilities [for no explicable reason].”
And we’re intentionally aiming at something that exceeds us along the very axis that led us to dominate the planet, so … ?
Another way of saying this: it’s very, very weird that the burden of proof on this brand-new and extremely powerful technology is “make an airtight case that it’s dangerous” instead of “make an airtight case that it’s a good idea.” Even a 50⁄50 shared burden would be better than the status quo.
I’ll note that

In response to any sort of criticism or disagreement, Yudkowsky and other folks’ default response seems to be to fly into a rage and to try to attack or humiliate the person making the criticism/expressing the disagreement.

...seems false.
The responsible thing to do is to go look at the balance of what experts in a field are saying, and in this case, they’re fairly split

This is not a crux for me. I think if you were paying attention, it was not hard to be convinced that AI extinction risk was a big deal in 2005–2015, when the expert consensus was something like “who cares, ASI is a long way off.” Most people in my college EA group were concerned about AI risk well before ML experts were concerned about it. If today’s ML experts were still dismissive of AI risk, that wouldn’t make me more optimistic.
Oh, I agree that if one feels equipped to go actually look at the arguments, one doesn’t need any argument-from-consensus. This is just, like, “if you are going to defer, defer reasonably.” Thanks for your comment; I feel similarly/endorse.
Made a small edit to reflect.
This seems like a motte-and-bailey. The question at hand is not about experts’ opinions on the general topic of existential risk from AGI, but specifically their assessment of Yudkowsky’s competence at understanding deep learning. You can believe that deep learning-based AGI is a serious existential risk within the next 20 years and also believe that Yudkowsky is not competent to understand the topic at a technical level.
As far as I know, Geoffrey Hinton has only commented on Yudkowsky’s high-level claims about existential risk from AGI — which is a concern Hinton shares — and has not said anything about Yudkowsky’s technical competence in deep learning.
If you know any examples of prominent experts in deep learning vouching for Yudkowsky’s technical competence in deep learning, specifically, I invite you to give citations.
Yudkowsky has said he believes he’s by far the smartest person in the world, at least when it comes to AI alignment/safety — as in, the second smartest doesn’t come close — and maybe the smartest person in the world in general. AI alignment/safety has been his life’s work since before he decided — seemingly sometime in the mid-to-late 2010s — that deep learning was likely to lead to AGI. MIRI pays him about $600,000 a year to do research. By now, he’s had plenty of opportunity to learn about deep learning. Given this, shouldn’t he show a good grasp on concepts in deep learning? Shouldn’t he be competent at making technical arguments about deep learning? Shouldn’t he be able to clearly, coherently explain his reasoning?
It seems like Yudkowsky must at least be wrong about his own intelligence because if he really were as intelligent as he thinks, he wouldn’t struggle with basic concepts in deep learning or have such a hard time defending the technical points he wants to make about deep learning. He would just be able to make a clear, coherent case, demonstrating an understanding of the definitions of widely-used terms and concepts. Since he can’t do that, he must be overestimating his own abilities by quite a lot.
In domains other than AI, such as Japanese monetary policy, he has expressed views with the same confidence and self-assurance he brings to deep learning, and those views turned out to be wrong; notably, he has never acknowledged the mistake. This speaks to Clara Collier’s point about his not updating his views based on new evidence. It’s not clear that any amount of evidence would (at least publicly) change his mind about any topic where he would lose face if he admitted being wrong. (He’s been wrong many times in the past. Has he ever publicly acknowledged it?) And if he doesn’t understand deep learning in the first place, then the public shouldn’t care whether he changes his mind or not.
You would most likely get fired for agreeing with me about this, so I can’t reasonably expect you to agree, but I might as well say the things that people on the payroll of a Yudkowsky-founded organization can’t say. For me, the cost isn’t losing a job; it’s just a bit of negative karma on a forum.
Sorry to be so blunt, but you’re asking for $6M to $10M to be redirected from possibly the world’s poorest people or animals in factory farms — or even other organizations working on AI safety — to your organization, led by Yudkowsky, so that you can try to influence policy at the U.S. national and international levels. Yudkowsky has indicated that if his preferred policy were enacted at an international scale, it might increase the risk of wars. This calls for a high level of scrutiny. No one should accept weak, flimsy, hand-wavy arguments about this. No one should, out of fear or politeness, tiptoe around Yudkowsky’s track record of false or extremely dubious claims, or avoid questioning his technical competence in deep learning, which is in serious doubt. If you, MIRI, or Yudkowsky don’t want this level of scrutiny, don’t ask for donations from the EA community and don’t try to influence policy.