I fully agree with the title of this post, although I do think Yudkowsky can be valuable if you treat him as an “interesting idea generator”, as long as you treat said ideas with a very skeptical eye.
I’ve only had time to comprehensively debunk one of his overconfident mistakes, but there are more mistakes or flaws I’ve noticed and haven’t gotten around to fleshing out in depth, which I’ll just list here:
Yudkowsky treats his case for the “many worlds hypothesis” as a slam-dunk that proves the triumph of Bayes, but in fact it is only half-done. He presents good arguments against “collapse is real”, but fails to argue that this means many worlds is the truth, rather than one of the many other interpretations which do not involve a real collapse. Stating that he’s solved the problem is flatly ridiculous.
The description of Aumann’s agreement theorem in “Defy the Data” is inaccurate, leaving out important caveats (such as the requirement of common priors and common knowledge of each other’s posteriors) that render his use of it incorrect.
In general, Yudkowsky talks about Bayes’ theorem a lot, but his descriptions of practical Bayesianism are firmly stuck at the 101 level, lacking, for example, any discussion of how to deal with uncertain priors or uncertain likelihood ratios (see the toy sketch after this list). I don’t know if he is unaware of how Bayesian statistics is actually used or if he just thinks it is too complicated to explain, but it has led to a lot of rationalists adopting a form of “pseudo-Bayesianism” that bears little resemblance to how Bayes is used in science.
Yud talks a lot about “Einstein’s arrogance” in a way that obfuscates the actual evidence behind Einstein’s belief, and if I recall correctly he has implied that using Bayes’ theorem can justifiably get you to the same level of arrogance. In fact, general relativity was a natural extension of special relativity (which had a ton of empirical evidence in its favour). Einstein’s arrogance was justified by the nature of the laws of physics and is in no way comparable to the type of speculative forecasts made by Yud and company.
The implications of the “AI box experiment” have been severely overstated. It does not at all prove that an AGI cannot be boxed, only that a subset of people are highly persuadable. “Rationalists are gullible” fits the evidence provided just as well.
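To give a flavour of the Bayes point above, here’s a toy sketch (the numbers and setup are entirely made up by me, not taken from anything Yudkowsky wrote) of how an uncertain likelihood ratio trips up the plug-and-chug version: averaging the ratio itself gives a very different answer from marginalizing the uncertainty inside the likelihoods, and only the latter is actually correct.

```python
# Toy example with made-up numbers: updating on evidence E for hypothesis H
# when the reliability of the evidence source is itself uncertain.

prior_h = 0.5             # P(H): prior that the hypothesis is true
sensitivity = 0.9         # P(E | H), assumed known for simplicity

# We don't know how noisy the evidence source is: its false-positive rate
# P(E | ~H) is either 0.9 (nearly useless) or 0.009 (very reliable), 50/50.
fp_rates = [0.9, 0.009]
weights = [0.5, 0.5]

# Tempting shortcut: average the likelihood ratios and plug that into Bayes.
avg_lr = sum(w * sensitivity / fp for w, fp in zip(weights, fp_rates))  # 50.5
odds = (prior_h / (1 - prior_h)) * avg_lr
posterior_naive = odds / (1 + odds)

# Proper treatment: marginalize the uncertainty inside the likelihoods,
# P(E | ~H) = sum over theta of P(E | ~H, theta) * P(theta),
# then apply Bayes' theorem once with the marginal likelihoods.
p_e_given_not_h = sum(w * fp for w, fp in zip(weights, fp_rates))       # 0.4545
p_e = sensitivity * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_exact = sensitivity * prior_h / p_e

print(f"naive (averaged likelihood ratio): {posterior_naive:.3f}")  # ~0.981
print(f"exact (marginalized likelihoods):  {posterior_exact:.3f}")  # ~0.664
```

The point isn’t this specific example; it’s that real Bayesian practice is full of choices like this (hierarchical priors, sensitivity analysis, model uncertainty) that the 101-level presentation never mentions.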
I haven’t even touched his twitter account, which I feel is just low-hanging fruit.
I fully agree with the title of this post, although I do think Yudkowsky can be valuable if you treat him as an “interesting idea generator”, as long as you treat said ideas with a very skeptical eye.
Fwiw I think the ‘rule thinkers in’ philosophy popular in EA and rat circles has itself been quite harmful. Yeah, there’s some variance in how good extremely smart people are at coming up with original takes, but for the demonstrably smart people, I think ‘interesting idea generation’ is more a case of ‘we can see them reasoning hypercarefully and scrupulously about their area of competence almost every time they speak on it; sometimes they also come up with genuinely novel ideas, and when those ideas are outside their realm of expertise maybe they slightly under-research and over-index on them’. I’m thinking of uncontroversially great thinkers like Feynman, Einstein and Newton, as well as more controversially great thinkers like Bryan Caplan and Elon Musk, here.
There is an opportunity cost to noise, and that cost is higher to a community the louder and more prominently the noise is broadcast within that community. You, the OP and many others have gone to substantial lengths to debunk views EY threw out almost casually that, as others have said, have made their way into rat circles almost unquestioned. Yet the cycle keeps repeating because ‘interesting idea generation’ is given so much credit.
Meanwhile, there are many more good ideas than there is bandwidth to look into them. In practice, this means for every bad idea a Yudkowsky or Hanson overconfidently throws out, some reasonable idea generated by someone more scrupulous but less good at self-marketing gets lost.
Actually, I think a comparison to Musk is pretty apt here. I frequently see Musk saying very incorrect things, and I don’t think his object-level knowledge of engineering is very good. But he is good at selling ideas and building hype, which has translated into funding for actual engineers to build rockets and electric cars in a way that probably wouldn’t have happened without his hype skills.
In the same way, Yud’s skills at persuasive writing have accelerated both AI research and AI safety research (Altman has credited him with boosting OpenAI). The problem is that he is not actually very good at AI safety research himself (or at any subset of its problems), and his beliefs and ideas on the subject are generally flawed. It would be like hiring Elon Musk directly to build a car in your garage.
At this point, I think the field of AI safety is big enough that you should stick to spokespeople who are actual experts in AI and who don’t make grand, incorrect statements on an almost weekly basis.