About the unexplained shift of focus from symbolic AI, which Yudkowsky was still claiming as of around 2015 or 2016
This is made up, as far as I can tell (at least re: symbolic AI as described in the Wikipedia article you link). See Logical or Connectionist AI? (2008):
As it so happens, I do believe that the type of systems usually termed GOFAI will not yield general intelligence, even if you run them on a computer the size of the moon.
Wikipedia, on GOFAI (reformatted, bolding mine):
Even earlier is Levels of Organization in General Intelligence. It is difficult to excerpt a quote, but it is not favorable to the traditional “symbolic AI” paradigm.
I really struggle to see how you could possibly have come to this conclusion, given the above.
And see here re: Collier’s review.
Nope, it was Yudkowsky in a Facebook group about AI x-risk around 2015 or 2016. He specifically said he didn’t think deep learning was the royal road to AGI.
I’m not sure what sort of system Yudkowsky had in mind, specifically, when he said he thought symbolic AI approaches were more likely to get there. He didn’t give more details. Maybe he saw a difference between symbolic AI and “the type of systems usually termed GOFAI”; some people draw a distinction between the two, while others say they’re the same thing. Maybe he changed his mind about GOFAI between 2008 and whenever that discussion took place. I don’t know. I’m only going off what he said, and he didn’t explain this.
In any case, at some point after 2008 and before 2023 he changed his mind about deep learning, and as far as I can tell he never explained why. This seems like an important topic to discuss, not a minor detail or an afterthought. What gives?
Again, it’s not even clear that Yudkowsky understands deep learning particularly well, so if his pre-deep-learning theory is supposed to apply to deep learning specifically, we need a substantive explanation from him of why it does. It’s the kind of thing that, if you were writing a book about this topic (or many long-form posts over the span of years), you would probably want to address.
The review of Collier’s review that you linked does not, in my view, adequately address the point I raised from Collier’s review. The author of that review does not demonstrate that they understand Collier’s point. They might or they might not, but it isn’t clear from what they wrote, so there isn’t the basis for a convincing reply there.
By the way, the last time we interacted on the EA Forum, you refused to retract a false accusation against me after I disproved it. I gave you the opportunity to apologize and try to have a good-faith discussion from that point, but you didn’t apologize and you didn’t retract the accusation. Given this, I don’t have much patience for engaging with you further. Take care.
Would you be able to locate the post in question? If Yudkowsky did indeed say that, I would agree that it would constitute a relevant negative update about his overall prediction track record.
This is a narrow point[1] but I want to point out that [not deep learning] is extremely broad, and the usage of the term “good old-fashioned AI” has been moving around between [not deep learning] and [deduction on Lisp symbols]. There’s a huge space of techniques in between (probabilistic programming, program induction/synthesis, support vector machines, dimensionality reduction à la t-SNE/UMAP, evolutionary methods…); a toy contrast is sketched below.
A hobby-horse of mine.
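To make that breadth concrete, here is a minimal illustrative sketch of my own (not drawn from the book or from anything Yudkowsky wrote): a toy forward-chaining rule engine in the [deduction on symbols] style next to a toy (1+1) evolution strategy, one of the in-between techniques. All function names, rules, and the fitness function are invented for illustration.

```python
# Illustrative only: two toy methods that are both "not deep learning" yet sit in
# very different paradigms. Everything here is made up for the sake of the example.
import random

# 1. "Deduction on symbols": a minimal forward-chaining rule engine.
def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(("rain",), "wet_ground"), (("wet_ground",), "slippery")]
print(forward_chain({"rain"}, rules))  # derives 'wet_ground' and 'slippery'

# 2. An evolutionary method: a (1+1) evolution strategy. No symbols, no gradients,
#    no neural networks; just mutate a candidate and keep it if it is no worse.
def evolve(fitness, x0, steps=500, sigma=0.3):
    best = list(x0)
    for _ in range(steps):
        candidate = [xi + random.gauss(0.0, sigma) for xi in best]
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

# Maximise a simple concave function whose optimum is at (3, -1).
fitness = lambda x: -((x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2)
print(evolve(fitness, [0.0, 0.0]))  # ends up near [3.0, -1.0]
```

The point is only that both of these count as [not deep learning], while only the first looks anything like deduction on Lisp symbols.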