My argument defeats Browning and LeCun's position that deep learning and gradient descent supply an alternative that can dispense with the Physical Symbol System Hypothesis. This undermines the main reason to believe that DL approaches will deliver on their promises any more than the symbolic systems of the 1950s did.
The argument was hard for me to follow, and by the end your conclusion was hard to pin down. You discussed it with several people in the comments, but even after some changes, I think it could use a rewrite.
So there it is. People who believe that AGI is imminent do so because the prevailing winds are saying that we are finally onto something closer to human cognition than anything we have ever tried. The winds sound like a storm, but they are really just a whimper.
I understand what you mean, but the FTX fund is serious, and so are the industry giants interested in this last mile of automation. They don't have to listen to the wind; they are throwing money at this stuff in one way or another. While I understand your argument against the hype, I doubt it makes for a satisfying answer to the overall question of timing and dangers. Machine learning is a trend in the software industry, where it has immediate applications. But research organizations are savvy: they will look at other ideas, and probably are right now.
I would like to see your thinking put in writing about the dangers of AI, particularly if you can provide historical context for a convincing argument that even simpler AI could lead to existential crises.
Hi Noah, thanks for the comment. I think there are a lot of possible questions that I did not tackle. My main interest was to show people an argument that AI won’t proceed past the pattern recognition stage in the foreseeable future, no matter how much money is thrown at it by serious people. As I showed in another post, I have good reason to believe that the argument is solid.
The dangers of current AI are real but I am not really involved in trying to estimate that risk.