This seems like a motte-and-bailey. The question at hand is not about experts’ opinions on the general topic of existential risk from AGI, but specifically their assessment of Yudkowsky’s competence at understanding deep learning. You can believe that deep learning-based AGI is a serious existential risk within the next 20 years and also believe that Yudkowsky is not competent to understand the topic at a technical level.
As far as I know, Geoffrey Hinton has only commented on Yudkowsky’s high-level claims about existential risk from AGI — a concern Hinton shares — and has not said anything about Yudkowsky’s technical competence in deep learning.
If you know of any examples of prominent experts in deep learning vouching for Yudkowsky’s technical competence in deep learning, specifically, I invite you to give citations.
Yudkowsky has said he believes he’s by far the smartest person in the world at least when it comes to AI alignment/​safety — as in, the second smartest doesn’t come close — and maybe the smartest person in the world in general. AI alignment/​safety has been his life’s work since before he decided — seemingly sometime in the mid-to-late 2010s — that deep learning was likely to lead to AGI. MIRI pays him about $600,000 a year to do research. By now, he’s had plenty of opportunity to learn about deep learning. Given this, shouldn’t he show a good grasp of concepts in deep learning? Shouldn’t he be competent at making technical arguments about deep learning? Shouldn’t he be able to clearly, coherently explain his reasoning?
It seems like Yudkowsky must at least be wrong about his own intelligence, because if he really were as intelligent as he thinks, he wouldn’t struggle with basic concepts in deep learning or have such a hard time defending the technical points he wants to make about it. He would simply be able to make a clear, coherent case, demonstrating an understanding of the definitions of widely used terms and concepts. Since he can’t do that, he must be overestimating his own abilities by quite a lot.
In domains other than AI, such as Japanese monetary policy, he has expressed views — with a level of confidence and self-assurance similar to what he expresses about deep learning — that turned out to be wrong, yet he has never acknowledged the mistake. This speaks to Clara Collier’s point about his not updating his views based on new evidence. It’s not clear that any amount of evidence would (at least publicly) change his mind on any topic where admitting he was wrong would mean losing face. (He’s been wrong many times in the past. Has he ever publicly changed his mind in such a case?) And if he doesn’t understand deep learning in the first place, then the public shouldn’t care whether he changes his mind or not.
You would most likely get fired for agreeing with me about this, so I can’t reasonably expect you to agree, but I might as well say the things that people on the payroll of a Yudkowsky-founded organization can’t say. For me, the cost isn’t losing a job; it’s just a bit of negative karma on a forum.
Sorry to be so blunt, but you’re asking for $6M to $10M to be redirected from possibly the world’s poorest people or animals in factory farms — or even from other organizations working on AI safety — to your organization, led by Yudkowsky, so that you can try to influence policy at both the U.S. national and international level. Yudkowsky has indicated that if his preferred policy were enacted at an international scale, it might increase the risk of wars. This calls for a high level of scrutiny. No one should accept weak, flimsy, hand-wavy arguments about this. No one should tiptoe around Yudkowsky’s track record of false or extremely dubious claims, or avoid questioning his technical competence in deep learning, which is in serious doubt, out of fear or politeness. If you, MIRI, or Yudkowsky don’t want this level of scrutiny, don’t ask for donations from the EA community and don’t try to influence policy.