Maybe the examples are ambiguous but they don’t seem cherrypicked to me. Aren’t these some of the topics Yudkowsky is most known for discussing? It seems to me that the cherrypicking criticism would apply to opinions about, I don’t know, monetary policy, not issues central to AI and cognitive science.
None of these issues are “central” to AI or the cognitive science that’s relevant to AI, AI alignment, or human upskilling. The author’s area of interest is more about consciousness, animal welfare, and qualia.
These issues are the sole justification for Omnizoid’s rather heated indictments of Yudkowsky, such as:
Eliezer has swindled many of the smartest people into believing a whole host of wildly implausible things. Some of my favorite writers—e.g. Scott Alexander—seem to revere Eliezer. It’s about time someone exposed the mountain of falsehoods on which his arguments rest. If one of the world’s most influential thinkers is just demonstrably wrong about lots of topics, often in ways so egregious that they demonstrate very basic misunderstandings, then that’s quite newsworthy, just as it would be if a presidential candidate supported a slate of terrible policies.
Most readers will only read the accusations in the introduction and then bounce off the evidence backing them, because all of them are topics that, like string theory, only a handful of people on earth are capable of engaging with. It just so happens that the author is one of them. Virtually nobody can read the actual arguments behind this post without dedicating >4 hours of their life to it, which makes it pretty well optimized to attract attention and damage Yudkowsky’s reputation as much as possible with effectively zero accountability.
I tried very hard to phrase everything as clearly as possible. But if people’s takeaway is “people who know about philosophy of mind and decision theory find Eliezer’s views there deeply implausible and indicative of basic misunderstandings,” then I don’t think that’s the end of the world. Of course, some would disagree.
If I were trying to list central historical claims that Eliezer made which were controversial at the time, I would start with things like:
AGI is possible.
AI alignment is the most important issue in the world.
Alignment will not be easy.
People will let AGIs out of the box.