Thanks, Noah, for your really interesting piece. I actually think we agree on most things. I certainly agree that AI can produce powerful systems without enlightening us about human cognition or following the same principles. I think chess-playing programs were among the first to demonstrate that, because they used massive search trees and lookahead algorithms that no human could carry out.
Where we diverge, I think, is when we talk about more general skills, like what people envision when they talk about “AGI”. Here I think the purely engineering approach won’t work, because it won’t find the solution by learning from observation. For example, consider abductive reasoning: inferring the best explanation for something you observe. To take a classic example: “Walking along the beach, you see what looks like a picture of Winston Churchill in the sand. It could be that, as in the opening pages of Hilary Putnam’s (1981), what you see is actually the trace of an ant crawling on the beach. The much simpler, and therefore (you think) much better, explanation is that someone intentionally drew a picture of Churchill in the sand. That, in any case, is what you come away believing.” (https://stanford.library.sydney.edu.au/archives/spr2013/entries/abduction/)
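Just to make the shape of the problem concrete, here is a toy sketch in Python of what “choosing the best explanation” amounts to. The candidate hypotheses and the numbers attached to them are entirely made up for illustration; this shows the form of the question, not how minds (or any real system) answer it.

```python
# Toy sketch only: a hand-coded illustration of "inference to the best
# explanation". The candidates and their scores are invented assumptions.

candidates = [
    # (hypothesis, prior plausibility, how well it accounts for what you saw)
    ("someone drew Churchill in the sand", 0.30, 0.95),
    ("an ant's wanderings traced Churchill", 0.001, 0.95),
    ("wind and waves formed the pattern", 0.01, 0.10),
]

def score(prior, fit):
    """Crude stand-in for 'goodness of explanation': weigh prior
    plausibility against explanatory fit."""
    return prior * fit

best = max(candidates, key=lambda c: score(c[1], c[2]))
print("Best explanation:", best[0])
```

The hard part, of course, is everything this sketch takes for granted: where the candidate hypotheses come from and what “simpler” or “better” means, which is exactly what learning from observation does not hand you.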
To be sure, no symbol-based theory can answer the question of how we perform abductive reasoning. But, as Jerry Fodor argues in his book “The Mind Doesn’t Work That Way”, connectionist theories can’t even ask the question.
Another example follows from the logic example in my first post. We can have complex formulas of propositional logic whose truth values are determined by the truth values of their constituents. The satisfiability question asks whether there is any assignment of truth values to the constituents that renders the whole formula true. This is another case where DL can’t even ask the question.
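To make the satisfiability question itself concrete, here is a minimal brute-force sketch in Python (the example formula at the end is just an illustration): enumerate every assignment of truth values to the constituents and check whether any of them makes the whole formula true.

```python
# Minimal sketch of propositional satisfiability by exhaustive search.
# Exponential in the number of variables, but it states the question exactly.
from itertools import product

def satisfiable(variables, formula):
    """formula maps a {name: bool} assignment to a bool."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment      # found a satisfying assignment
    return None                    # the formula is unsatisfiable

# Example: (p or q) and (not p or r) and (not r)
f = lambda a: (a["p"] or a["q"]) and (not a["p"] or a["r"]) and (not a["r"])
print(satisfiable(["p", "q", "r"], f))   # {'p': False, 'q': True, 'r': False}
```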
For these examples I really do think we have to have machines that, to some extent, rely on principles similar to those of the human mind. I think this is also true for complex planning, and so on.
As for the last part, I am a little sad about the economic motives behind AI. At the very beginning, the biggest use of the technology was to figure out which link people would click; advertising was the biggest initial driver of this remarkable technology. Fortunately, we have since found more important uses for it in fields like medical technology and farming, and a few other applications I have heard of, mainly where image recognition is important. That was a significant step forward. Self-driving cars are a telling case: they are very good in conditions where image recognition is all you need, but fail completely in more complex situations where, for example, abductive reasoning is needed.
But still, a lot of the monetary drivers are companies like Facebook and Google, which want to support their advertising revenue in one way or another.
Ah, OK.
I’m finding evidence of software techniques useful for simulating abductive thinking. There are uses in automated software quality testing, some work in symbolic reasoning tools (related to backward chaining), and, I think, some newer scientific tools that do hypothesis generation and testing.
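For what it’s worth, here is a rough sketch in Python of the sort of thing I mean by an abductive step on top of backward chaining. The rule base and the predicate names are invented for illustration, and real symbolic tools do far more (unification, proof search, costs on hypotheses); this only shows how chaining backward from an observation can generate candidate explanations.

```python
# Hedged sketch: candidate-hypothesis generation by backward chaining.
# rules: conclusion <- premises (read "each premise would explain the conclusion")
rules = {
    "wet_grass": ["rained_overnight", "sprinkler_ran"],
    "rained_overnight": ["storm_passed_through"],
}

def abduce(observation, depth=2):
    """Collect candidate hypotheses that would explain the observation
    by chaining backward through the rules up to a fixed depth."""
    if depth == 0 or observation not in rules:
        return {observation}
    hypotheses = set()
    for premise in rules[observation]:
        hypotheses |= abduce(premise, depth - 1)
    return hypotheses

print(abduce("wet_grass"))   # {'storm_passed_through', 'sprinkler_ran'}
```

Selecting among the generated candidates then still needs some notion of plausibility or simplicity, which is where the hard part of abduction comes back in.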
I suspect that one obstacle to creating a tool that appears to have common sense is its lack of a world model, for what it’s worth. But as I review what I’ve come across on the topic of developing AI with common sense, I suspect there are multiple paths to simulating it, depending on what satisfices as a demonstration of “common sense”: for example, whether it is common sense in discussing the world versus in interacting with the world.
I read through your posts to date and the comments here and on LessWrong. You got a lot of engagement and interest in exploring the details of your claims, in particular the examples you supplied from GPT-3. You went into some depth with your Python examples and got some pushback. Your submissions will probably get read by FTX.
Accordingly, I agree with Linch’s idea that you could answer a question about the danger of an AI tool developed this century, whether it meets your criteria for a true AGI or not. Your answer would probably get some interest.
I understand that you believe there might be another AI winter if too many people buy into the hype about DL, but I don’t foresee that happening. Contributions from robotics will prevent that result, if nothing else does.
Thanks for the comments, Noah.
I also agree that the “AI winter” will be different this time, simply because the current AI summer has provided useful tools for dealing with big data, which will always find uses. The expert systems of old had very limited uses and a large cost of entry; DL models have a relatively low cost of entry, and most businesses have some problems that could benefit from that kind of analysis.
Well, when I wrote about the AI winter here, I meant what I took to be your focus, that is, true AGI: intelligent, self-aware artificial general intelligences.
If you want to submit another post for the prize, or send in a submission, you can remove the prize tag from the current submissions. You might post a draft here and ask for comments, to be sure that you are being read correctly.
Hmmm. I hope we are not talking past each other here. I realise that an AI winter would mean the failure of AGI. But DL as an analysis tool is so useful that “AI” won’t completely disappear. Nor, of course, will funding, though I suspect it will be reduced once the enthusiasm dies down.
So I hope my current submission is not missing the mark on this, as I don’t see any contradiction in my view regarding an “AI winter”.
OK. As we have communicated, you have filled in what I took to be gaps in your original presentation. It might be useful to review your discussion with the many people who have shown an interest in your work and see whether you can write a final piece that summarizes your position effectively. I, for one, had to interpret your position and ask for your feedback in order to be sure I was summarizing it correctly.
My position, which stands in contrast to yours, is that current research in AI and robotics could lead to AGI if other circumstances permit it. I don’t particularly think it is necessary or a good idea to develop AGI; doing so will only add danger and difficulty to an already difficult world scene (as well as add people to it). But I also think it is important to recognize the implications once it happens.