A general reflection: I wonder if one contributing factor (at least a minor one) to disagreement about whether this post is worthwhile is different understandings of who the relevant audience is.
I mostly have in mind people who have read and engaged a little bit with AI risk debates, but not yet in a very deep way, and would overall be disinclined to form strong independent views on the basis of (e.g.) simply reading Yudkowsky’s and Christiano’s most recent posts. I think the info I’ve included in this post could be pretty relevant to these people, since in practice they’re often going to rely a lot—consciously or unconsciously; directly or indirectly—on cues about how much weight to give different prominent figures’ views. I also think that the majority of members of the existential risk community are in this reference class.
I think the info in this post isn’t nearly as relevant to people who’ve consumed and reflected on the relevant debates very deeply. The more you’ve engaged with and reflected on an issue, the less you should be inclined to defer—and therefore the less relevant track records become.
(The limited target audience might be something I don’t do a good enough job communicating in the post.)
I think that insofar as people are deferring on matters of AGI risk etc., Yudkowsky is in the top 10 people in the world to defer to based on his track record, and arguably top 1. Nobody who has been talking about these topics for 20+ years has a similarly good track record. If you restrict attention to the last 10 years, then Bostrom and Carl Shulman do, and maybe some other people too (Gwern?); if you restrict attention to the last 5 years, then arguably about a dozen people have a somewhat better track record than him.
(To my knowledge. I think I’m probably missing a handful of people who I don’t know as much about because their writings aren’t as prominent in the stuff I’ve read, sorry!)
He’s like Szilard. Szilard wasn’t right about everything (e.g. he predicted there would be a war and that the Nazis would win), but he was right about a bunch of things, including that there would be a bomb and that this put all of humanity in danger, and, importantly, he was the first to say so, by several years.
I think if I were to write a post cautioning people against deferring to Yudkowsky, I wouldn’t talk about his excellent track record, but rather about his arrogance, his inability to clearly explain and argue for his views (at least on some important topics; he’s clear on others), his seeming bias towards pessimism, his ridiculously high (and therefore seemingly overconfident) credences in things like p(doom), etc. These are the reasons I would reach for (and do reach for) when arguing against deferring to Yudkowsky.
[ETA: I wish to reemphasize, more strongly this time, that Yudkowsky seems pretty overconfident not just now but historically. Anyone deferring to him should keep this in mind; maybe update in the direction of his credences, but don’t adopt his credences outright. E.g. think “we’re probably doomed” but not “99% chance of doom.” Also, Yudkowsky doesn’t seem to be listening to others and understanding their positions well, so his criticisms of other views should be listened to but not deferred to, IMO.]
“Nobody who has been talking about these topics for 20+ years has a similarly good track record.”
Really? We know EY made a bunch of mispredictions: “A certain teenaged futurist, who, for example, said in 1999, ‘The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.’” What are his good predictions? I can’t see a single example in this thread.
Ironically, one of the two predictions you quote as an example of a bad prediction is in fact an example of a good prediction: “The most realistic estimate for a seed AI transcendence is 2020.”
Currently it seems that AGI/superintelligence/singularity/etc. will happen sometime in the 2020s. Yudkowsky’s median estimate in 1999 was apparently 2020, so he probably had something like 30% of his probability mass in the 2020s, and maybe 15% of it in the 2025-2030 period, when IMO it’s most likely to happen.
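To make that arithmetic concrete, here is a minimal sketch of one way those rough numbers could arise. It assumes a lognormal distribution over “years until AGI, as seen from 1999” with a median of 21 years (i.e. 2020) and a spread parameter (sigma = 0.5) that I picked myself; it is an illustration, not a distribution Yudkowsky actually stated.

```python
# Minimal sketch (my assumptions, not Yudkowsky's stated distribution):
# a lognormal over "years until AGI, as of 1999" with median 21 years
# (i.e. 2020) and an arbitrary spread of sigma = 0.5, then read off how
# much probability mass lands in the 2020s and in the 2025-2030 window.
from scipy.stats import lognorm

median_years = 21                          # 1999 + 21 = 2020, the quoted estimate
dist = lognorm(s=0.5, scale=median_years)  # for a lognormal, scale = exp(mu) = median

def mass_between(year_lo, year_hi, base_year=1999):
    """Probability mass assigned to the interval [year_lo, year_hi)."""
    return dist.cdf(year_hi - base_year) - dist.cdf(year_lo - base_year)

print(f"Mass in the 2020s (2020-2030): {mass_between(2020, 2030):.0%}")  # ~28%
print(f"Mass in 2025-2030:             {mass_between(2025, 2030):.0%}")  # ~12%
```

With a different spread the exact percentages shift, but that’s the point: a 1999 median of 2020 pretty naturally places a double-digit chunk of probability on the late 2020s.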
Now let’s compare to what other people would have been saying at the time. Almost all of them would have been saying 0%, and maybe the smarter and more rational ones would have been saying something like 1%, for the 2025-2030 period.
To put it in nonquantitative terms, almost everyone else in 1999 would have been saying “AGI? Singularity? That’s not a thing, don’t be ridiculous.” The smarter and more rational ones would have been saying “OK it might happen eventually but it’s nowhere in sight, it’s silly to start thinking about it now.” Yudkowsky said “It’s about 21 years away, give or take; we should start thinking about it now.” Now with the benefit of 24 years of hindsight, Yudkowsky was a lot closer to the truth than all those other people.
Also, you didn’t reply to my claim. Who else has been talking about AGI etc. for 20+ years and has a similarly good track record? Which of them managed to make only correct predictions when they were teenagers? Certainly not Kurzweil.