If you’re going to claim he is ‘egregiously’ wrong I would hope for clearer examples, like that he said the population of China was 100 million, or that the median apartment in Brooklyn cost $100k, or something like that. These three examples seem both cherrypicked (anyone with a long career as a genuine intellectual innovator will make claims on a wide variety of subjects, so three is nothing like what is required to claim ‘frequent’) and ambiguous.
FDT isn’t cherry-picked, as Eliezer has described himself as a decision theorist and his main contribution is TDT (which later developed into FDT).
That seems correct to me. Perhaps not by coincidence, I also think the case against FDT is the weakest of his three, with some of the counterexamples being cases where I’m happy to bite the bullet, and the others seeming no worse than the objections to CDT, EDT, TDT, UDT etc.
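As a purely illustrative aside on where those theories come apart, here is a minimal Python sketch of Newcomb’s problem, the standard case in which CDT-style dominance reasoning and the one-boxing verdict of EDT/FDT diverge. The 99% predictor accuracy and the conventional $1,000,000 / $1,000 payoffs are assumptions for the sake of the example, not anything from this thread; the expected-value calculation below is the simple evidential one, which happens to agree with FDT’s one-box recommendation even though FDT’s rationale (subjunctive dependence on the agent’s decision procedure) is different.

```python
# Illustrative sketch only: Newcomb's problem with conventional payoffs.
# A predictor fills an opaque box with $1,000,000 iff it predicts you one-box;
# a transparent box always contains $1,000.

PREDICTOR_ACCURACY = 0.99  # assumed accuracy, chosen for illustration


def evidential_expected_value(one_box: bool, accuracy: float = PREDICTOR_ACCURACY) -> float:
    """Expected payoff when the prediction correlates with your choice
    (the style of calculation under which one-boxing comes out ahead)."""
    if one_box:
        # With probability `accuracy` the opaque box was filled.
        return accuracy * 1_000_000
    # Two-boxing: the $1,000 is guaranteed; the $1,000,000 shows up only on a misprediction.
    return 1_000 + (1 - accuracy) * 1_000_000


def causal_dominance_value(one_box: bool, box_already_filled: bool) -> float:
    """Payoff holding the (already fixed) box contents constant -- the dominance
    argument by which CDT recommends two-boxing."""
    base = 1_000_000 if box_already_filled else 0
    return base if one_box else base + 1_000


if __name__ == "__main__":
    print("Evidential EV, one-box:", evidential_expected_value(True))   # 990000.0
    print("Evidential EV, two-box:", evidential_expected_value(False))  # 11000.0
    for filled in (True, False):
        print(f"Dominance (box filled={filled}): "
              f"one-box={causal_dominance_value(True, filled)}, "
              f"two-box={causal_dominance_value(False, filled)}")
```

Running it shows the familiar tension: holding the box contents fixed, two-boxing is always $1,000 better, yet the agents who one-box walk away with far more in expectation, which is the kind of verdict people disagree about biting the bullet on.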
Maybe the examples are ambiguous but they don’t seem cherrypicked to me. Aren’t these some of the topics Yudkowsky is most known for discussing? It seems to me that the cherrypicking criticism would apply to opinions about, I don’t know, monetary policy, not issues central to AI and cognitive science.
If I were trying to list central historical claims Eliezer made that were controversial at the time, I would start with things like:
AGI is possible.
AI alignment is the most important issue in the world.
Alignment will not be easy.
People will let AGIs out of the box.