I read your post for over 30 minutes trying to figure out what is going on. I don’t think what I wrote above is relevant, or the issue, anymore.
Basically, I think what you did was write a narration to yourself, made up of things that are individually basically true but that no one claims are important. You also slip in claims like “human cognition must resemble AGI for AGI to happen” without making a tight argument for them[1].
You then aim this line of reasoning at your final point: “We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route.”
Also, this is really hard to follow; there are things in this argument that read like a triple negative.
Honestly, both my decision to read this and my subsequent performance in untangling it make me think I’m pretty dumb.
For example, you present the claim that “DL is much more successful than symbolic AI because it’s closer to the human brain”, and you say this is “defeated” later. Ok. That seems fine.
Later you “defeat” the claim that:
“DL-symbol systems (whatever those are) will be much better because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.”
You say this means:
We don’t know that DL-symbol systems (whatever those are) will be much better than classical AI because DL has not shown anything about the nature of human cognition.
But no one is claiming that the nature of human cognition is relevant to AI?
This is your final point before claiming that AGI can’t come from DL or “symbol-DL”.
Charles, thanks for spending so much time trying to understand my argument. I hope my previous answer helps. I’ve also added a paragraph to clarify my stance before I give my points.
You also say that I “slip in claims like ‘human cognition must resemble AGI for AGI to happen’”. I don’t think I said that; if I did, I must correct it.