For these examples, I really do think we have to have machines that, to some extent, rely on principles similar to those of the human mind. I think this is also true for complex planning, etc.
Ah, OK.
I’m finding evidence of software techniques useful for simulating abductive thinking. There are applications in automated software quality testing, some in symbolic reasoning tools (related to backward chaining), and, I think, some in newer scientific tools that do hypothesis generation and testing.
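To make the backward-chaining connection concrete, here is a minimal, hypothetical sketch (the rules and fact names are invented for illustration, not taken from any tool mentioned above). Abduction runs the rules backward from an observation and collects the assumptions that would explain it:

```python
# Toy abduction via backward chaining (hypothetical example).
# Each rule maps a conclusion to alternative lists of premises.
RULES = {
    "wet_grass": [["rained"], ["sprinkler_on"]],
    "rained": [["clouds"]],
}

# Facts we can observe directly; anything else that terminates a
# chain is treated as an abducible hypothesis.
KNOWN = {"clouds"}

def explanations(goal):
    """Return the sets of hypotheses that would explain `goal`."""
    if goal in KNOWN:
        return [set()]          # already known: nothing to assume
    if goal not in RULES:
        return [{goal}]         # no rule: assume it as a hypothesis
    results = []
    for premises in RULES[goal]:
        combos = [set()]        # combine explanations of each premise
        for p in premises:
            combos = [c | e for c in combos for e in explanations(p)]
        results.extend(combos)
    return results

print(explanations("wet_grass"))
# -> [set(), {'sprinkler_on'}]: rain is already entailed by the known
#    clouds, or we can hypothesize the sprinkler was on.
```

The same shape underlies abductive diagnosis in test tooling: the "rules" encode how the system should behave, and the hypotheses returned are candidate faults that would explain a failing observation.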
I suspect that one obstacle to creating a tool that appears to have common sense is its lack of a world model, fwiw. But as I review what I’ve come across on the topic of developing AI with common sense, I suspect there are multiple paths to simulating it, depending on what satisfices as a demonstration of “common sense”: for example, whether it’s common sense in discussing the world versus interacting with the world.
I read through your posts to date and your comments here and on LessWrong. You got a lot of engagement and interest in exploring the details of your claims, in particular the examples you supplied from GPT-3. You went into some depth with your Python examples and got some pushback. Your submissions will probably be read by FTX.
Accordingly, I agree with Linch’s idea that you could answer a question about the danger of an AI tool developed this century, whether or not it meets your criteria for a true AGI. Your answer would probably attract some interest.
I understand you believe there might be another AI winter if too many people buy into the hype about DL, but I don’t foresee that happening. Contributions from robotics will prevent that result, if nothing else does.
I also agree that the “AI winter” will be different this time, simply because the current AI summer has provided useful tools for dealing with big data, and those will always find uses. Expert systems of old had very limited uses and a large cost of entry. DL models have a relatively low cost of entry, and most businesses have some problems that could benefit from that kind of analysis.
Well, when I write about the AI winter here, I mean what I took to be your focus, that is, true AGI: intelligent, self-aware artificial general intelligence.
If you want to submit another post for the prize, or send in a separate submission, you can remove the prize tag from your current submissions. You might also post a draft here and ask for comments, to be sure you are being read correctly.
Hmmm. I hope we are not talking past each other here. I realise the next AI winter would follow from the failure of AGI. But DL as an analysis tool is so useful that “AI” won’t completely disappear. Nor will funding, of course, though I suspect it will be reduced once the enthusiasm dies down.
So I hope my current submission is not missing the mark on this, as I don’t see any contradiction in my view regarding an “AI winter”.
OK. As we have communicated, you have filled in what I took to be gaps in your original presentation. It might be useful to review your discussions with the many people who have shown an interest in your work, and see whether you can write a final piece that summarizes your position effectively. I, for one, had to interpret your position and ask for your feedback to be sure I was summarizing it correctly.
My position, which stands in contrast to yours, is that current research in AI and robotics could lead to AGI if other circumstances permit. I don’t think it is necessary or a good idea to develop AGI; doing so will only add danger and difficulty to an already difficult world scene (as well as add people to it). But I also think it is important to recognize the implications once it happens.
Thanks for the comments, Noah.