Nope, it was Yudkowsky in a Facebook group about AI x-risk around 2015 or 2016. He specifically said he didn't think deep learning was the royal road to AGI.
I'm not sure what sort of system Yudkowsky had in mind, specifically, when he said he thought symbolic AI approaches were more likely to get there. He didn't give more details. Maybe he saw a difference between symbolic AI and "the type of systems usually termed GOFAI". Some people seem to draw a distinction between the two, whereas others say they're the same thing. Maybe he changed his mind about GOFAI between 2008 and whenever that discussion took place. I don't know. I'm only going off what he said, and he didn't explain this.
In any case, at some point after 2008 and before 2023 he changed his mind about deep learning, and he never explained why, as far as I can tell. This seems like an important topic to discuss, and not a minor detail or an afterthought. What gives?
Again, it's not clear that Yudkowsky even understands deep learning particularly well, so to apply his pre-deep-learning theory to deep learning specifically, we need a substantive explanation from him. It's the kind of thing that, if you were writing a book about this topic (or many long-form posts over the span of years), you would probably want to address.
The review of Collier's review that you linked does not, in my view, adequately address the point I raised from Collier's review. Its author does not demonstrate to me that they understand Collier's point. They might understand it or they might not, but it isn't clear from the text either way, so there is no basis for a convincing reply there.
By the way, the last time we interacted on the EA Forum, you refused to retract a false accusation against me after I disproved it. I gave you the opportunity to apologize and try to have a good-faith discussion from that point, but you didn't apologize and you didn't retract the accusation. Given this, I don't have much patience for engaging with you further. Take care.
Would you be able to locate the post in question? If Yudkowsky did indeed say that, I would agree that it would constitute a relevant negative update about his overall prediction track record.
This is a narrow point[1], but I want to point out that [not deep learning] is extremely broad, and usage of the term "good old-fashioned AI" has been moving around between [not deep learning] and [deduction on Lisp symbols]. I think there's a huge space of techniques in between (probabilistic programming, program induction/synthesis, support vector machines, dimensionality reduction à la t-SNE/UMAP, evolutionary methods…); one such in-between method is sketched below.
[1] A hobby-horse of mine.
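To make that "in between" concrete, here is a minimal sketch of one such technique: a support vector machine, which is neither deep learning (no gradient-trained neural network) nor GOFAI (no symbolic deduction). This assumes scikit-learn is installed; the dataset and hyperparameters are illustrative choices of mine, not anything from the thread.

```python
# A support-vector machine: a kernel method from the space between
# deep learning and symbolic AI. Illustrative sketch using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An RBF-kernel SVM learns a nonlinear decision boundary from data,
# with no neural network and no symbolic deduction involved.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```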