Your part 1 seems to say that cognitive theories (models) of the human mind (human intelligence), when programmed into computer hardware, fail to produce machines with capabilities sufficiently analogous to the human mind to qualify as AGI. You use the example of symbolic reasoning.
The early Lisp machines were supposed to evolve into humanlike intelligences, based on faith in a model of human cognition, namely symbolic reasoning. Lisp let you do symbol processing, symbol processing let a machine do symbolic reasoning, and symbolic reasoning was (in theory) what we humans were doing. As the Lisp machines improved, their symbolic reasoning skills would eventually reach human levels and thus exhibit human intelligence. Except that advancement never happened. Efforts to create general intelligences using symbolic reasoning techniques were stymied.
Your part 2 seems to say that deep learning fails to represent the human mind with enough completeness to allow creation of an AGI using only deep learning (DL) techniques. You conclude in part 2 that any combination of symbolic reasoning and deep learning will also fail to reach human-level intelligence.
Your discussion of Python requires a bit of interpretation on my part. I think what you mean is that because deep learning succeeds in generating valid Python (from a written description?), it shows production capabilities that contradict any known cognitive theories for how symbolic reasoning is performed. A neural network model that does well at generating program code should be impossible because generating program code requires symbolic reasoning. Neural network models do not resemble classical symbolic reasoning models. Yet we have deep learning software that can generate program code. Therefore, we cannot assume that there is an isomorphic relationship between programmable cognitive models and human performance of cognitive tasks. Instead, we should assume that, at best, programmable cognitive models are suitable only for describing programs that mimic certain aspects of human intelligence. Their development bears no necessary resemblance to the development of human intelligence.
In fact, machines can already mimic some aspects of human intelligence at levels of performance much greater than humans, but most people in the industry don’t seem to think that demonstrates human-like general intelligence (though there are some Turing tests out there being passed, apparently).
If I have understood you correctly, then your conclusion is a familiar one. There’s been an acknowledgement in AI research for a long time that software models do not represent actual human cognition. It usually shows in how the software fails. Deep learning pattern recognition and generation fail in weird and obvious ways. Symbolic reasoning programs are obviously dumb. They don’t “understand” anything, they only perform logical operations. AIs do the narrowly defined tasks that their algorithms allow.
So in general you disagree that:
1. If we believe in a model of human cognition (a cognitive theory), and we can program the model into hardware, then we can program a machine that cognizes like a human.
2. We programmed a model of human cognition into hardware.
3. We wrote software that cognizes like a human in some respect.
4. We can advance the software until it is as smart as a human.
and instead think:
1. If software models fail to capture the actual operations of human cognition, then they will fail to resemble human-level intelligence.
2. Software models of human cognition fail to capture the actual operations of human cognition.
3. Software models of human cognitive production fail to resemble human-level intelligence.
4. We should concentrate on amplifying human cognitive ability instead.
I don’t agree that models that fail to capture human cognitive operation accurately will necessarily fail to resemble human-level intelligence.
Take a list of features of human intelligence, such as:
1. language processing
2. pattern recognition and generation
3. learning
4. planning
5. sensory processing
6. motor control
As the sophistication of software and hardware models/designs improves, machines start to resemble human beings in their capabilities. There is no reason to require, for the sake of argument, that the machines actually employ a cognitive model that correctly describes how humans process information. All the argument should involve is whether the machines exhibit some level of intelligence comparable to a human’s. And that might be possible using cognitive models that are very different from how we humans think.
Now I am not trying to trivialize the judgement of level of intelligence in any being. I think that is a hard judgement to make, and requires understanding their context much more than we seem able to do. This shows up in trying to understand the intelligence of animals. We don’t understand what they intend to do and what they care about and so of course we think they’re stupid when they don’t do what we expect them to do “if they are smart”. I think it’s humans that fail to judge intelligence well. We call other species stupid too easily.
The state of the art in AI progresses over time.
1. symbol processing (some language processing and planning)
2. deep learning algorithms (learning and pattern recognition and generation)
3. symbol processing + neural networks (learning, pattern recognition, pattern generation, planning)
4. ???
Meanwhile, in robotics you have advances in:
1. sensory processing
2. motor control
3. ???
and companies trying to use advanced software models for robots with the goal of helping them develop sensory processing and motor control. As those efforts improve or move into other areas, it will be easier for humans to see that they are in fact making intelligent machines, maybe even at AGI level. In other words, humans will recognize the AGI more easily as having consciousness.
In addition, here’s a quote:
Some claim that by[sic] building complete robots, which can interact with their surroundings in much the same way as humans can, will result in more progress towards understanding human intelligence and producing more general-purpose intelligent systems. This approach contrasts with the more traditional approach in Artificial Intelligence of focusing on a particular task, such as planning or vision, and developing sophisticated isolated programs able to deal with complex but idealized problems. … to automate human intelligence, it is better to start by building a complete human-like system with the abilities of a human baby (or even an insect!), and progress from there, rather than concentrate on ‘adult’ versions of isolated skills, and then hope that we will be able to eventually glue the various components together.… a robot able to interact with its environment in … ways that a human can will be able to learn the more advanced skills...
-from page 176 of The Essence of Artificial Intelligence, by Alison Cawsey, 1998
That learning ability, embodied in a capable humanoid body outfitted with cameras, microphones, and tactile sensors, might someday soon seem human enough to qualify as “AGI”.
You wrote a lot so let me respond in parts.
You claim I say that “A neural network model that does well at generating program code should be impossible because generating program code requires symbolic reasoning.” I did not say that. Writing complex code requires symbolic reasoning, but apparently making relatively simple productions does not. This is in fact the problem.
“If I have understood you correctly, then your conclusion is a familiar one. There’s been an acknowledgement in AI research for a long time that software models do not represent actual human cognition.” This is not my conclusion. I added a paragraph before my points that could clarify. Of course no one believes that a particular software model represents actual human cognition. What they believe is that the principles behind the software model are in principle the same ones that enable human cognition. This is what my argument is meant to defeat.
I agree with some of your concluding comments but am confused by others. For example, you say
“I don’t agree that models that fail to capture human cognitive operation accurately will necessarily fail to resemble human-level intelligence.
Take a list of features of human intelligence, such as:
1. language processing
2. pattern recognition and generation
3. learning
4. planning
5. sensory processing
6. motor control
As the sophistication of software and hardware models/designs improves, machines start to resemble human beings in their capabilities.”
But this does not seem to be true. Take point 3, learning. ML theorists make a lot of this because learning is one of their core advantages. But there is no ML theory of human language acquisition. ML language learning couldn’t be further from the human ability to learn language, and no one (as far as I know) is even silly enough to make a serious theory to link the two. B.F. Skinner tried to do that in the 50s, and it was one of the main reasons for the downfall of Behaviorism.
As for the other points, it is the human abilities that rely more on pattern completion that are showing success. But this is to be expected.
I mean I can agree with many of your points, but they do have a “let us wait and see” flavour to them. My point would be that there is no principled reason to think that something like general human intelligence will pop up any time soon.
Well, I meant to communicate what I took you to mean, that is, that no software model (or set of principles from a cognitive theory that has a programmatic representation) so far used in AI software represents actual human cognition. And that therefore, as I understood you to mean, there’s no reason to believe that AI software in future will achieve a resemblance to human cognitive capabilities. If you meant something far different, that’s OK.
AI researchers typically satisfy themselves with creating the functional equivalent of a human cognitive capability. They might not care that the models of information processing that they employ to design their software don’t answer questions about how human cognition works.
Let’s keep on agreeing that there is not an isomorphism between:
* software/hardware models/designs that enable AI capabilities.
* theoretical models of human cognitive processes, human biology, or human behaviors.
In other words, AI researchers don’t have to theorize about processes like human language acquisition. All that matters is that the AI capabilities that they develop meet or exceed some standard of intelligence. That standard might be set by human performance or by some other set of metrics for intelligence. Either way, nobody expects that reaching that standard first requires that AI researchers understand how humans perform cognitive tasks (like human language acquisition).
It is obvious to me that advances in AI will continue and that contributions will come from theoretical models as well as hardware designs. Regardless of whether anyone ever explains the processes behind human cognitive capabilities, humans could make some suspiciously intelligent machines.
I follow one timeline for the development of world events. I can look at it over long or short time spans or block out some parts while concentrating on others, but the reality is that other issues than AGI shape my big picture view of what’s important in future. Yes, I do wait and see what happens in AGI development, but that’s because I have so little influence on the outcomes involved, and no real interest in producing AGI myself. If I ever make robots or write software agents, they will be simple, reliable, and do the minimum that I need them to do.
I have a hunch that humanoid robot development will be a faster approach to developing recognizably intelligent AGI than concentrating on purely software models for AGI.
Improving humanoid robot designs could leapfrog issues potentially involved in:
how researchers frame or define AGI capabilities.
software compute vs. hardware design requirements (especially for sensors and sensory data processing).
actuator training and use.
information representation in robots.
but I think people already know that.
Conversely, concentrating on AI with narrow functionality serves purposes like:
automating some aspects of expert-level work.
substituting for undesirable forms of human labor.
concentrating the same amount of work into fewer people’s efforts.
augmenting individual human labor to improve a human’s work capacity.
and people already know that too.
AI with narrow functionality will cause technological unemployment just like AGI could. I think there’s a future for AI with either specific or more general capabilities, but only because powerful financial interests will support that future.
And look, this contest that FTX is putting on has a big pot, most of which might not get awarded. Not only that, but not all submissions will even be “engaged with”. That means that no one will necessarily read them. They will be filtered out. The FTX effort is a clever way to crowd-source, but it is counterproductive.
If you review the specifications and the types of questions, you don’t see questions like:
when will we have safety models that protect us from out-of-control AGI?
what options are there to avoid systemic risks from AGI?
Those questions are less sexy than questions about chances of doom or great wealth. So the FTX contest questions are about the timing of capability development. Now, if I want to be paid to time that development, I could go study engineering for a while and then try for a good job at the right company where I also drive that development. The fact is, you have to know something about capability development in order to time it. And that’s what throws this whole contest off.
If you look at the supposed upside of AGI, underpaid AGI ($25/hr? Really?) doing work in an AI company, you don’t see any discussion of AGI rights or autonomy. The whole scenario is implausible from several perspectives.
All to say, if you think that AGI fears are a distracting sideshow, or based on overblown marketing, then I am curious about your thoughts on the economic goals behind AGI pursuit. Are those goals also hype? Are they overblown? Are they appropriate? I wonder what you think.
Thanks Noah for your really interesting piece. I actually think we agree on most things. I certainly agree that AI can produce powerful systems without enlightening us about human cognition, or following the same principles. I think chess-playing programs were among the first to demonstrate that, because they used massive search trees and lookahead algorithms which no human could carry out.
Where we diverge, I think, is when we talk about more general skills like what people envision when they talk about “AGI”. Here I think the purely engineering approach won’t work, because it won’t find the solution by learning from observation. For example, consider abductive reasoning: making an inference to the best explanation of some things you observe. For instance: “Walking along the beach, you see what looks like a picture of Winston Churchill in the sand. It could be that, as in the opening pages of Hilary Putnam’s (1981), what you see is actually the trace of an ant crawling on the beach. The much simpler, and therefore (you think) much better, explanation is that someone intentionally drew a picture of Churchill in the sand. That, in any case, is what you come away believing.” (https://stanford.library.sydney.edu.au/archives/spr2013/entries/abduction/)
To be sure, no symbol-based theory can answer the question of how we perform abductive reasoning. But, as Jerry Fodor argues in his book “The Mind Doesn’t Work That Way”, connectionist theories can’t even ask the question.
Another example follows from my logic example in my first post. That is, we can have complex formulas of propositional logic, whose truth values are determined by the truth values of their constituents. The question of satisfiability is to see whether there is any assignment of truth values to the constituents which will render the whole formula true. Another case where DL can’t even ask the question.
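To make the satisfiability question concrete, here is a minimal brute-force sketch in Python (the example formula, variable names, and helper function are just illustrative, not taken from either of our posts): it enumerates every assignment of truth values to the constituents and reports whether any assignment makes the whole formula true.

```python
from itertools import product

def satisfiable(variables, formula):
    """Brute-force satisfiability check.

    `formula` is a function mapping a dict {variable: bool} to a bool.
    This enumerates all 2^n assignments, so it is only a toy illustration
    of what the satisfiability question asks, not a practical SAT solver.
    """
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment  # found an assignment making the formula true
    return None  # no assignment works: the formula is unsatisfiable

# Example formula: (p or q) and (not p or r) and (not q or not r)
example = lambda a: (a["p"] or a["q"]) and (not a["p"] or a["r"]) and (not a["q"] or not a["r"])
print(satisfiable(["p", "q", "r"], example))  # e.g. {'p': True, 'q': False, 'r': True}
```

The point is that a symbolic system can pose and answer this question directly, whereas a pure pattern-completion model has no native way even to represent it.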
For these examples I really do think we have to have machines which, to some extent, rely on principles similar to those of the human mind. I think this is also true for complex planning, etc.
As for the last part, I am a little sad about the economic motives of AI. I mean, at the very beginning the biggest use of the technology was to figure out which link people would click. Advertising is the biggest initial driver of this magic technology. Fortunately we have had more important uses for it in fields like medical technology, farming, and a few other applications I have heard of, mainly where image recognition is important. That was a significant step forward. Self-driving cars are a telling story: very good in conditions where image recognition is all you need, but they totally fail in more complex situations where, for example, abductive reasoning is needed.
But still a lot of the monetary drivers are from companies like Facebook and Google who want to support their advertising revenue in one way or another.
Ah, OK.
I’m finding evidence of software techniques useful for simulating abductive thinking. There are uses in automated software quality testing, some stuff in symbolic reasoning tools (related to backchaining), and I think some newer science tools that do hypothesis generation and testing.
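For a sense of what that backchaining-style simulation of abduction can look like, here is a minimal, hypothetical sketch in Python (the rule base, fact names, and explain helper are invented for illustration, not taken from any particular tool): given an observation, it chains backward through cause-to-effect rules and collects candidate explanations.

```python
# Hypothetical rule base: each pair says the first item ("cause") can explain the second ("effect").
RULES = [
    ("someone_drew_in_sand", "churchill_picture_in_sand"),
    ("ant_traced_path",      "churchill_picture_in_sand"),
    ("tide_patterns",        "ripples_in_sand"),
]

def explain(observation, rules=RULES, depth=2):
    """Backward-chain from an observation to candidate explanations.

    Returns every cause that could (directly or indirectly, up to `depth`
    steps) produce the observation. Ranking candidates by plausibility is
    the hard part of abduction and is not attempted here.
    """
    if depth == 0:
        return []
    candidates = []
    for cause, effect in rules:
        if effect == observation:
            candidates.append(cause)
            candidates.extend(explain(cause, rules, depth - 1))
    return candidates

print(explain("churchill_picture_in_sand"))
# ['someone_drew_in_sand', 'ant_traced_path']
```

Collecting candidates like this is the easy part; choosing the best one, which is what the Churchill-in-the-sand example turns on, is where these simulations still fall short.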
I suspect that one obstacle to creating a tool that appears to have common sense is its lack of a world model, fwiw. But as I review what I’ve come across on the topic of developing AI with common sense, I suspect that there are multiple paths to simulating it, depending on what satisfices for demonstrating “common sense”: for example, whether it’s common sense in discussing the world versus interacting with the world.
I read through your posts to date and comments here and on LessWrong. You got a lot of engagement and interest in exploring details of your claims, in particular the examples you supply from GPT-3. You went into some depth with your examples from Python and got some pushback. Your submissions will probably get read by FTX.
Accordingly, I agree with Linch’s idea that you could answer a question about the danger of an AI tool developed this century, whether it meets your criteria for a true AGI or not. Your answer would probably get some interest.
I understand your belief is that there might be another AI winter if too many people buy into AI hype about DL, but I don’t foresee that happening. Contributions from robotics will prevent that result, if nothing else does.
Thanks for the comments, Noah.
I also agree that the “AI winter” will be different this time, simply because the current AI summer has provided useful tools for dealing with big data, which will always find uses. Expert systems of old had very limited uses and a large cost of entry. DL models have a relatively low cost of entry, and most businesses have some problems that could benefit from some analysis.
Well, when I wrote about the AI winter here, I meant what I thought was your focus, that is, true AGI: intelligent, self-aware artificial general intelligences.
If you want to submit another post for the prize, or send in a submission, you can remove the prize tag from the current submissions. You might post a draft here and ask for comments, to be sure that you are being read correctly.
Hmmm. I hope we are not talking past each other here. I realise that the AI winter will be the failure of AGI. But DL as an analysis tool is so useful that “AI” won’t completely disappear. Nor will funding of course, though it will be reduced I suspect once the enthusiasm dies down.
So I hope my current submission is not missing the mark on this, as I don’t see any contradiction in my view regarding an “AI winter”.
OK, as we have communicated, you have filled in what I took to be gaps in your original presentation. It might be useful to review your discussion with the many people who have shown an interest in your work, and see if you can write a final piece that summarizes your position effectively. I for one had to interpret your position and ask for your feedback in order to be sure I was summarizing your position correctly.
My position, which stands in contrast to yours, is that current research in AI and robotics could lead to AGI if other circumstances permit it. I don’t particularly think it is necessary or a good idea to develop AGI; doing so will only add danger and difficulty to an already difficult world scene (as well as add people to it). But I also think it is important to recognize the implications once it happens.