I feel like there are two types of thinkers, the first we might call innovators and the second systematizers. Innovators are the kinds of people who think of wacky, out-of-the-box ideas, but are less likely to be right. They enrich the state of discourse by being clever, creative, and coming up with new ideas, rather than being right about everything. A paradigm example is Robin Hanson—no one feels comfortable just deferring to Robin Hanson across the board, but Robin Hanson has some of the most ingenious ideas.
Systematizers, in contrast, are the kinds of people who reliably generate true beliefs on lots of topics. A good example is Scott Alexander. I didn’t research Ivermectin, but I feel confident that Scott’s post on Ivermectin is at least mostly right.
I think people think of Eliezer as a systematizer. And this is a mistake, because he just makes too many errors. He’s too confident about things he’s totally ignorant about. But he’s still a great innovator. He has lots of interesting, clever ideas that are worth hearing out. In general, however, the fact that Eliezer believes something is not especially probative. Eliezer’s skill lies in good writing and ingenious argumentation, not forming true beliefs.
Although this is a pretty gross oversimplification, it does touch on some valuable points.
I don’t think it makes sense to assume an equivalence between Hanson and Yudkowsky. Hanson has bumbled into the AI arena in a way that Yudkowsky never would. It’s possible that Hanson developed skill only at making groundbreaking discoveries in economics and sociology, and stumbled rather than hitting the ground running when he ventured into AI, and that Yudkowsky did something similar with consciousness ethics. But even if that’s true, it’s where the similarities would end, because human neurotypes, and the personal histories and skills of Hanson, Scott Alexander, and Yudkowsky, are each vastly more complicated than the oversimplified innovator-systematizer dichotomy this post leaves as its concluding argument.
I tend to think Hanson more reliably generates true beliefs than Eliezer.