I fully agree with the title of this post, although I do think Yudkowsky can be valuable if you treat him as an "interesting idea generator", as long as you view said ideas with a very skeptical eye.
Fwiw, I think the "rule thinkers in" philosophy popular in EA and rat circles has itself been quite harmful. Yes, there's some variance in how good extremely smart people are at coming up with original takes, but for the demonstrably smart people, "interesting idea generation" looks more like: we can see them reasoning hypercarefully and scrupulously about their area of competence almost every time they speak on it; sometimes they also come up with genuinely novel ideas; and when those ideas fall outside their realm of expertise, maybe they slightly underresearch and overindex on them. I'm thinking here of uncontroversially great thinkers like Feynman, Einstein, and Newton, as well as more controversially great thinkers like Bryan Caplan and Elon Musk.
There is an opportunity cost to noise, and that cost is higher to a community the louder and more prominently the noise is broadcast within it. You, the OP, and many others have gone to substantial lengths to debunk views EY threw out almost casually that, as others have said, have made their way into rat circles almost unquestioned. Yet the cycle keeps repeating because "interesting idea generation" is given so much weight.
Meanwhile, there are many more good ideas than there is bandwidth to investigate them. In practice, this means that for every bad idea a Yudkowsky or a Hanson overconfidently throws out, some reasonable idea generated by someone more scrupulous but less skilled at self-marketing gets lost.
Actually, I think the comparison to Musk is pretty apt here. I frequently see Musk saying very incorrect things, and I don't think his object-level knowledge of engineering is very good. But he is good at selling ideas and building hype, which has translated into funding for actual engineers to build rockets and electric cars in a way that probably wouldn't have happened without his hype skills.
In the same way, Yud's skills at persuasive writing have accelerated both AI research and AI safety research (Altman has credited him with boosting OpenAI). The problem is that he is not actually very good at AI safety research himself (or any subset of its problems), and his beliefs and ideas on the subject are generally flawed. It would be like hiring Elon Musk directly to build a car in your garage.
At this point, I think the field of AI safety is big enough that it should stick to spokespeople who are actual experts in AI and who don't make grand, incorrect statements on an almost weekly basis.