My argument against AGI
This is the third post in my attempt to convince the Future Fund Worldview Prize judges that “all of this AI stuff is a misguided sideshow”. My first post was an extensive argument that unfortunately confused many people.
(The probability that Artificial General Intelligence will be developed)
My second post was much more straightforward, but ended up focusing mostly on revealing the reaction that some “AI luminaries” have shown to my argument.
(Don’t expect AGI anytime soon)
Now, as a result of answering many excellent questions that exposed the confusions caused by my argument, I believe I am in a position to make a very clear and brief summary of the argument in point form.
To set the scene, the Future Fund is interested in predicting when we will have AI systems that can match human-level cognition: “This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs.” This is a pretty tall order. It means systems with advanced planning and decision-making capabilities. But this is not the first time people have predicted that we will have such machines. In my first article I reference a 1960 paper which states that the US Air Force predicted such a machine by 1980. The prediction was based on the same “look how much progress we have made, so AGI can’t be too far away” argument we see today. There must be a new argument or belief if today’s AGI predictions are to bear more fruit than they did in 1960. My argument identifies this new belief. Then it shows why the belief is wrong.
Part 1
1. Most of the prevailing cognitive theories involve classical symbol processing systems (with a combinatorial syntax and semantics, like formal logic). For example, theories of reasoning and planning involve logic-like processes, and natural language is thought by many to involve phrase structure grammars, as, for example, Python does.
2. Good old-fashioned AI was (largely) based on the same assumption: that classical symbol systems are necessary for AI.
3. Good old-fashioned AI failed, showing the limitations of classical symbol systems.
4. Deep Learning (DL) is an alternative form of computation that does not involve classical symbol systems, and its amazing success shows that human intelligence is not based on classical symbol systems. In fact, Geoff Hinton proclaimed in his Turing Award speech that “the success of machine translation is the last nail in the coffin of symbolic AI”.
5. DL will be much more successful than symbolic AI because it is based on a better model of cognition: the brain. That is, the brain is a neural network, so clearly neural networks are going to be better models.
6. But hang on. DL is now very good at producing syntactically correct Python programs. Point 4 should then make us conclude that Python does not involve classical symbol systems, because a non-symbolic DL model can write Python. This is patently false, so the argument becomes a reductio ad absurdum. One of the steps must be wrong, and the obvious choice is point 4, which gives us point 7.
7. The success of DL in performing some human task tells us nothing about the underlying human competence needed for the task. For example, natural language might well be the production of a generative grammar, in spite of the fact that statistical methods currently outperform methods based on parsing.
8. Point 7 defeats point 5. There is no scientific reason to believe DL will be much more successful than symbolic AI was at attaining some kind of general intelligence.
Part 2
1. In fact, some of my work is already done for me, as many of the top experts concede that DL alone is not enough for “AGI”. They propose that a symbolic system is needed to supplement DL, in order to do planning, high-level reasoning, abductive reasoning, and so on.
2. The symbolic system should be non-classical, because of Part 1, points 2 and 3. That is, we need something better than classical systems because good old-fashioned AI failed as a result of its assumptions about symbol systems.
3. DL-symbol systems (whatever those are) will be much better, because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.
4. But Part 1, point 7 defeats Part 2, point 3. We don’t know that DL-symbol systems (whatever those are) will be much better than classical AI, because DL has not shown anything about the nature of human cognition.
5. We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL-based AI route. The fact that DL can do Python shows that it is good at mimicking symbolic systems when lots of example productions are available, as with natural language and Python. But it struggles in tasks like planning, where such examples are absent.
6. We should instead focus our attention on human-machine symbiosis: explicitly designing systems that supplement rather than replace human intelligence.
I don’t think I quite follow what you consider to be the reductio. In particular, I don’t see why your argument wouldn’t also go through with humans. Why doesn’t the following hold?
Biological Learning (BL) is an alternative form of computation that does not involve classical symbol systems, but instead just a bunch of neurons and some wet stuff, and its amazing success at producing human intelligence shows that human intelligence is not based on classical symbolic systems.
The reductio is specifically about Python. I show that the argument must conclude that Python is not symbolic, which means the argument must be wrong.
So your alternative would be that BL shows that Python is not based on classical symbol systems.
We don’t know how human cognition works which is why the BL argument is appealing. But we do know how Python works.
But humans made Python.
If you claim it’s impossible for a non-classical system to create something symbolic, I don’t think you get to hide behind “we don’t know how human cognition works”. I think you need to defend the position that human cognition must be symbolic, and then explain how this arises from biological neural networks but not artificial ones.
Yes, humans made Python because we have the ability for symbolic thought.
And I am not saying that non-classical systems can’t create something symbolic. In fact this is the crux of my argument that Symbolic-Neuro symbolic architectures (see my first post) DO create symbol strings. It is the process with which they create the strings that is in question.
If you agree that bundles of biological neurons can have the capacity for symbolic thought, and that non-classical systems can create something symbolic, I don’t understand why you think anything you’ve said shows that DL cannot scale to AGI, even granting your unstated assumption that symbolic thought is necessary for AGI.
(I think that last assumption is false, but don’t think it’s a crux here so I’m keen to grant it for now, and only discuss once we’ve cleared up the other thing)
Biological neurons have very different properties from artificial networks in very many ways. These are well documented. I would never deny that ensembles of biological neurons have the capacity for symbol manipulation.
I also believe that non-classical systems can learn mappings between symbols, because this is in fact what they do. Language models map from word tokens to word tokens.
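A minimal sketch of what “mapping from word tokens to word tokens” can look like, using a toy bigram model of my own devising (the corpus and code are illustrative assumptions, not a claim about how any actual DL language model works):

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it learns a mapping from one word token
# to the next purely from co-occurrence counts, with no grammar rules
# defined over symbols.
corpus = "the dog saw the cat the cat saw the dog".split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("saw"))  # -> "the": the only token ever observed after "saw"
```

The model never represents a rule like “a verb is followed by a noun phrase”; it only stores which tokens followed which.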
What they don’t do, as the inventors of DL insist, is learn classical symbol manipulation with rules defined over symbols.
Could you mechanistically explain how any of the ‘very many ways’ biological neurons are different means that the capacity for symbol manipulation is unique to them?
They’re obviously very different, but what I don’t think you’ve done is show that the differences are responsible for the impossibility of symbolic manipulation in artificial neural networks.
I think I may have said something to confuse the issue. Artificial neural networks certainly ARE capable of representing classical symbolic computations. In fact the first neural networks (e.g. the perceptron) did just that. They typically do it with local representations, where individual nodes assume the role of representing a given variable. But these were not very good at other tasks, like generalisation.
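As a concrete (and entirely toy) illustration of a localist unit computing a classical symbolic function, here is a single perceptron hard-wired to compute Boolean AND; the weights and threshold are my own choices for the sketch, not taken from any historical system:

```python
# A single perceptron computing the Boolean AND of two inputs: one node,
# fixed weights and a threshold, playing the role of an explicit logical
# operation over variables -- a "localist" representation in the sense above.
def perceptron_and(x1, x2, w1=1.0, w2=1.0, threshold=1.5):
    # Fire (output 1) iff the weighted sum of inputs reaches the threshold.
    return int(x1 * w1 + x2 * w2 >= threshold)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron_and(a, b))
# prints 1 only for inputs (1, 1), matching the AND truth table
```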
More advanced distributed networks emerged, with DL being the newest incarnation. These have representations which make it very difficult (if not impossible) to dedicate nodes to variables. This does not worry the architects, because they specifically believe that the non-localised representation is what makes the networks so powerful (see Bengio, LeCun and Hinton’s Turing Award article).
Turning to real neurons, the fact is that we really don’t know all that much about how they represent knowledge. We know where they tend to fire in response to given stimuli, we know how they are connected, and we know that they have some hierarchical representations. So I can’t give you a biological explanation of how neural ensembles can represent variables. All I can do is give you arguments that humans DO perform symbolic manipulation on variables, so somehow their brain has to be able to encode this.
If you can make an artificial network somehow do this eventually then fine. I will support those efforts. But we are nowhere near that, and the main actors are not even pushing in that direction.
That last comment seems very far from the original post which claimed
If we don’t have a biological explanation of how BNNs can represent and perform symbolic manipulation, why do we have reason to believe that we know ANNs can’t?
Without an ability to point to the difference, this isn’t anything close to a reductio, it’s just saying “yeah I don’t buy it dude, I don’t reckon AI will be that good”
Sorry, I think you are misunderstanding the reductio argument. That argument simply undermines the claim that natural language is not based on a generative phrase structure grammar, that is, the claim that non-symbolic DL is the “proper” model of language. In fact they are called “language models”. I claim they are not models of language, and therefore there is no reason to discard symbolic models … which is where the need for symbol manipulation comes from. Hence the need for a very different sort of architecture than current DL.
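For readers unfamiliar with the term, here is a minimal toy sketch of a generative phrase structure grammar: rewrite rules defined over symbols, expanded recursively. The grammar itself is an invented example for illustration, not drawn from any particular linguistic theory:

```python
import random

# A toy phrase structure grammar: each non-terminal rewrites to one of
# several sequences of symbols, exactly the kind of rule-over-symbols
# system classical theories posit.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"], ["saw"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:          # terminal: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for sym in expansion:
        words.extend(generate(sym))
    return words

print(" ".join(generate()))  # e.g. "the dog chased the cat"
```

Every sentence the grammar produces is built by the same explicit rules, which is what distinguishes it from a statistical token-to-token mapping.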
And of course we can point to differences between artificial and biological networks. I didn’t because there are too many! One of the big ones is backpropagation, THE major reason we have ANNs in the first place, and it is completely biologically implausible. There is no backpropagation in the brain.
You seem really informed about detailed aspects of language, modelling, and seem to be an active researcher with a long career in modelling and reasoning.
I can’t fully understand or engage with your claims or posts, because I don’t actually know how AI and “symbolic logic” would work, how it reasons about anything, and really even how to start thinking about it.
Can you provide a primer of what symbolic logic/symbolic computing is, as it is relevant to AI (in any sense), and how it is supposed to work on a detailed level, i.e., so I could independently apply it to problems? (E.g. blog post, PDF chapter of a book).
(Assume your audience knows statistical machine learning, like linear classifiers, deep learning, rule based systems, coding, basic math, etc.).
Charles, I don’t think it is necessary to understand all the details about logic to understand my point. The example of a truth table is enough, as I explain in my first post.
It’s more like these deep learning systems are mimicking Python very well. There’s no actual symbolic reasoning. You believe this… right?
Zooming out and untangling this a bit, I think the following is a bit closer to the issue?
Why is this right?
There’s no reason to think that any particular computational performance is connected to human intelligence. Why do you believe this? A smartphone is amazingly better than humans at a lot of tasks, but that doesn’t seem to mean anything obvious about the nature of human intelligence.
Zooming out more here, it reads like there’s some sort of beef/framework/grand theory/assertion related to symbolic logic, human intelligence, and AGI that you are strongly engaged in. It reads like you got really into this theory and built up your own argument, but it’s unclear why the claims of this underlying theory are true (or even what they are).
The resulting argument has a lot of nested claims and red herrings (the Python thing) and it’s hard to untangle.
I don’t think the question of whether intelligence is pattern recognition, or symbolic logic, is the essence of people’s concerns about AGI. Do you agree or not?
I’m not sure this statement is correct or meaningful (in the context of your argument) because learning Python syntactically isn’t what’s hard, but expressing logic in Python is, and I don’t know what this expression of logic means in your theory. I don’t think you addressed it and I can’t really fill in where it fits in your theory.
Charles, you are right, there is a deep theoretical “beef” behind the issues, but it is not my beef. The debate between “connectionist” neural network theories and symbol-based theories raged in the 1980s and 1990s. These were really nice scientific debates based on empirical results. Connectionism faded away because it did not prove adequate to a lot of challenges. Geoff Hinton was a big part of that debate.
When compute power and data availability grew so fantastically in the 2010s, DL started to have the practical success you see today. Hinton re-emerged victorious and has been wildly attacking believers in symbolic systems ever since. In fact, there is a video of him deriding the EU for being tricked into continued funding of symbolic AI research!
I prefer to stay with scientific argumentation and claim that the fact that DL can produce Python defeats Hinton’s claim (not mine) that DL machine translation proves that language is not a symbolic process.
I literally read your post for over 30 minutes to try to figure out what is going on. I don’t think what I wrote above is relevant or the issue anymore.
Basically, I think what you did was write a narration to yourself, with things that are individually basically true, but that no one claims is important. You also slip in claims like “human cognition must resemble AGI for AGI to happen”, but without making a tight argument.
You then point this resulting reasoning at your final point: “We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route.”.
Also, it’s really hard to follow this, there’s things in this argument that seem to be like a triple negative.
Honestly, both my decision to read this and my subsequent performance in untangling this, makes me think I’m pretty dumb.
For example, you say that “DL is much more successful than symbolic AI because it’s closer to the human brain”, and you say this is “defeated” later. Ok. That seems fine.
Later you “defeat” the claim that:
You say this means:
But no one is talking about the nature of human cognition being related to AI?
This is your final point before claiming that AGI can’t come from DL or “symbol-DL”.
Charles, thanks for spending so much time trying to understand my argument. I hope my previous answer helps. Also I added a paragraph to clarify my stance before I give my points.
Also, you say that I “slip in claims like ‘human cognition must resemble AGI for AGI to happen’”. I don’t think I said that. If I did, I must correct it.
Is symbol grounding necessary for (dangerous) AGI?
The way I think of risk from AGI is in terms of a giant unconscious pile of linear algebra that manipulates the external world, via an unstoppable optimisation process, to a configuration that is incompatible with biological life. A blind, unfeeling, unknowing “idiot god” that can destroy worlds. Of course you could argue that this is not true “AGI” (i.e. because there is no true “understanding” on the part of the AGI, and it’s all just, at base level, statistical learning), but that’s missing the point.
I think current AI is already dangerous. But that is not so much my concern. I am answering the question of whether AGI is possible at all in the foreseeable future.
Ok, that’s more of a semantic issue with the definition of AGI then. FTX Future Fund care about AI that poses an existential threat, not about whether such AI is AGI, or strong AI or true AI or whatever. Perhaps Transformative AI or TAI (as per OpenPhil’s definition) would be better used in this case.
I’m not sure what Future Fund care about, but they do go into some length defining what they mean by AGI, and they do care about when this AGI will be achieved. This is what I am responding to.
Your part 1 seems to say that cognitive theories (models) of the human mind (human intelligence), when programmed into computer hardware, fail to produce machines with capabilities sufficiently analogous to the human mind to qualify as AGI. You use the example of symbolic reasoning.
The early lisp machines were supposed to evolve into humanlike intelligences, based on faith in a model of human cognition, that is, symbolic reasoning. Lisp let you do symbol processing and symbol processing let a machine do symbolic reasoning and symbolic reasoning was what we humans were supposedly doing (in theory). As the lisp machines improved, their symbolic reasoning skills would eventually reach human levels of symbolic reasoning capabilities and thus exhibit human intelligence. Except that advancement never happened. Efforts to create general intelligences using symbolic reasoning techniques were stymied.
Your part 2 seems to say that deep learning fails to represent the human mind with enough completeness to allow creation of an AGI using only deep learning (DL) techniques. You conclude in part 2 that any combination of symbolic reasoning and deep learning will also fail to reach human-level intelligence.
Your discussion of python requires a bit of interpretation on my part. I think what you mean is that because deep learning succeeds in generating valid python (from a written description?), it shows production capabilities that contradict any known cognitive theories for how symbolic reasoning is performed. A neural network model that does well at generating program code should be impossible because generating program code requires symbolic reasoning. Neural network models do not resemble classical symbolic reasoning models. Yet we have deep learning software that can generate program code. Therefore, we cannot assume that there is an isomorphic relationship between programmable cognitive models and human performance of cognitive tasks. Instead, we should assume that at best programmable cognitive models are suitable only for describing programs that mimic certain aspects of human intelligence. Their development bears no necessary resemblance to development of human intelligence.
In fact, machines can already mimic some aspects of human intelligence at levels of performance much greater than humans, but most people in the industry don’t seem to think that demonstrates human-like general intelligence (though there are some Turing tests out there being passed, apparently).
If I have understood you correctly, then your conclusion is a familiar one. There’s been an acknowledgement in AI research for a long time that software models do not represent actual human cognition. It usually shows in how the software fails. Deep learning pattern recognition and generation fail in weird and obvious ways. Symbolic reasoning programs are obviously dumb. They don’t “understand” anything; they only perform logical operations. AIs do the narrowly-defined tasks that their algorithms allow.
So in general you disagree that:
1. If we believe in a model of human cognition (a cognitive theory), and we can program the model into hardware, then we can program a machine that cognates like a human.
2. we programmed a model of human cognition into hardware.
3. we wrote software that cognates like a human in some respect.
4. we can advance the software until it is as smart as a human.
and instead think:
1. If software models fail to capture the actual operations of human cognition, then they will fail to resemble human level intelligence.
2. Software models of human cognition fail to capture the actual operations of human cognition.
3. Software models of human cognitive production fail to resemble human-level intelligence.
4. We should concentrate on amplifying human cognitive ability instead.
I don’t agree that models that fail to capture human cognitive operation accurately will necessarily fail to resemble human-level intelligence.
Take a list of features of human intelligence, such as:
1. language processing
2. pattern recognition and generation
3. learning
4. planning
5. sensory processing
6. motor control
As the sophistication of software and hardware models/designs improve, machines start to resemble human beings in their capabilities. There is no reason to require for the sake of argument that the machines actually employ a cognitive model that correctly describes how humans process information. All the argument should involve is whether the machines exhibit some level of intelligence comparable to a human. And that might be possible using cognitive models that are very different from how we humans think.
Now I am not trying to trivialize the judgement of level of intelligence in any being. I think that is a hard judgement to make, and requires understanding their context much more than we seem able to do. This shows up in trying to understand the intelligence of animals. We don’t understand what they intend to do and what they care about and so of course we think they’re stupid when they don’t do what we expect them to do “if they are smart”. I think it’s humans that fail to judge intelligence well. We call other species stupid too easily.
The state of the art in AI progresses over time.
1. symbol processing (some language processing and planning)
2. deep learning algorithms (learning and pattern recognition and generation)
3. symbol processing + neural networks (learning, pattern recognition, pattern generation, planning)
Meanwhile, in robotics you have advances in:
1. sensory processing
2. motor control
and companies trying to use advanced software models for robots with the goal of helping them develop sensory processing and motor control. As those efforts improve or go into other areas, it will be easier for humans to see that they are in fact making intelligent machines, maybe even AGI level. In other words, humans will recognize the AGI more easily as having consciousness.
In addition, here’s a quote:
-from page 176 of Essence of Artificial Intelligence, by Alison Cawsey, 1998
That learning ability, embodied in a capable humanoid body outfitted with cameras, microphones, and tactile sensors, might someday soon seem human enough to qualify as “AGI”.
You wrote a lot so let me respond in parts.
You claim I say that “A neural network model that does well at generating program code should be impossible because generating program code requires symbolic reasoning.” I did not say that. Writing complex code requires symbolic reasoning, but apparently making relatively simple productions does not. This is in fact the problem.
“If I have understood you correctly, then your conclusion is a familiar one. There’s been an acknowledgement in AI research for a long time that software models do not represent actual human cognition.” This is not my conclusion. I added a paragraph before my points that could clarify. Of course no one believes that a particular software model represents actual human cognition. What they believe is that the principles behind the software model are in principle the same ones that enable human cognition. This is what my argument is meant to defeat.
I agree with some of your concluding comments but confused by some. For example you say
“I don’t agree that models that fail to capture human cognitive operation accurately will necessarily fail to resemble human-level intelligence.
Take a list of features of human intelligence, such as:
1. language processing
2. pattern recognition and generation
3. learning
4. planning
5. sensory processing
6. motor control
As the sophistication of software and hardware models/designs improve, machines start to resemble human beings in their capabilities.”
But this does not seem to be true. Take point 3, learning. ML theorists make a lot of this because learning is one of their core advantages. But there is no ML theory of human language acquisition. ML language learning couldn’t be further from the human ability to learn language, and no one (as far as I know) is even silly enough to make a serious theory linking the two. B.F. Skinner tried to do that in the 1950s, and it was one of the main reasons for the downfall of Behaviorism.
As for the other points, it is the human abilities that rely more on pattern completion that are showing success. But this is to be expected.
I mean, I can agree with many of your points, but they do have a “let us wait and see” flavour to them. My point is that there is no principled reason to think that something like general human intelligence will pop up any time soon.
Well, I meant to communicate what I took you to mean, that is, that no software model (or set of principles from a cognitive theory that has a programmatic representation) so far used in AI software represents actual human cognition. And that therefore, as I understood you to mean, there’s no reason to believe that AI software in future will achieve a resemblance to human cognitive capabilities. If you meant something far different, that’s OK.
AI researchers typically satisfy themselves with creating the functional equivalent of a human cognitive capability. They might not care that the models of information processing that they employ to design their software don’t answer questions about how human cognition works.
Let’s keep on agreeing that there is not an isomorphism between:
* software/hardware models/designs that enable AI capabilities.
* theoretical models of human cognitive processes, human biology, or human behaviors.
In other words, AI researchers don’t have to theorize about processes like human language acquisition. All that matters is that the AI capabilities that they develop meet or exceed some standard of intelligence. That standard might be set by human performance or by some other set of metrics for intelligence. Either way, nobody expects that reaching that standard first requires that AI researchers understand how humans perform cognitive tasks (like human language acquisition).
It is obvious to me that advances in AI will continue and that contributions will come from theoretical models as well as hardware designs. Regardless of whether anyone ever explains the processes behind human cognitive capabilities, humans could make some suspiciously intelligent machines.
I follow one timeline for the development of world events. I can look at it over long or short time spans or block out some parts while concentrating on others, but the reality is that other issues than AGI shape my big picture view of what’s important in future. Yes, I do wait and see what happens in AGI development, but that’s because I have so little influence on the outcomes involved, and no real interest in producing AGI myself. If I ever make robots or write software agents, they will be simple, reliable, and do the minimum that I need them to do.
I have a hunch that humanoid robot development will be a faster approach to developing recognizably-intelligent AGI as opposed to concentrating on purely software models for AGI.
Improving humanoid robot designs could leapfrog issues potentially involved in:
* how researchers frame or define AGI capabilities.
* software compute vs hardware design requirements (especially of sensors and sensory data processing).
* actuator training and use.
* information representation in robots.
but I think people already know that.
Conversely, concentrating on AI with narrow functionality serves purposes like:
* automating some aspects of expert-level work.
* substituting for undesirable forms of human labor.
* concentrating the same amount of work into fewer people’s efforts.
* augmenting individual human labor to improve a human’s work capacity.
and people already know that too.
AI with narrow functionality will cause technological unemployment just like AGI could. I think there’s a future for AI with either specific or more general capabilities, but only because powerful financial interests will support that future.
And look, this contest that FTX is putting on has a big pot most of which might not get awarded. Not only that, but not all submissions will even be “engaged with”. That means that no one will necessarily read them. They will be filtered out. The FTX effort is a clever way to crowd-source but it is counterproductive.
If you review the specifications and the types of questions, you don’t see questions like:
* when will we have safety models that protect us from out-of-control AGI?
* what options are there to avoid systemic risks from AGI?
Those questions are less sexy than questions about chances of doom or great wealth. So the FTX contest questions are about the timing of capability development. Now, if I want to be paid to time that development, I could go study engineering for a while and then try for a good job at the right company where I also drive that development. The fact is, you have to know something about capability development in order to time it. And that’s what throws this whole contest off.
If you look at the supposed upside of AGI, underpaid AGI ($25/hr? Really?) doing work in an AI company, you don’t see any discussion of AGI rights or autonomy. The whole scenario is implausible from several perspectives.
All to say, if you think that AGI fears are a distracting sideshow, or based on overblown marketing, then I am curious about your thoughts on the economic goals behind AGI pursuit. Are those goals also hype? Are they overblown? Are they appropriate? I wonder what you think.
Thanks, Noah, for your really interesting piece. I actually think we agree on most things. I certainly agree that AI can produce powerful systems without enlightening us about human cognition or following the same principles. I think chess-playing programs were among the first to demonstrate that, because they used massive search trees and lookahead algorithms that no human could emulate.
Where we diverge, I think, is when we talk about more general skills, like what people envision when they talk about “AGI”. Here I think the purely engineering approach won’t work, because it won’t find the solution by learning from observation. For example, consider abductive reasoning: inferring the best explanation of some things you observe. For example: “Walking along the beach, you see what looks like a picture of Winston Churchill in the sand. It could be that, as in the opening pages of Hilary Putnam’s (1981), what you see is actually the trace of an ant crawling on the beach. The much simpler, and therefore (you think) much better, explanation is that someone intentionally drew a picture of Churchill in the sand. That, in any case, is what you come away believing.” (https://stanford.library.sydney.edu.au/archives/spr2013/entries/abduction/)
To be sure, no symbol-based theory can answer the question of how we perform abductive reasoning. But, as Jerry Fodor argues in his book “The Mind Doesn’t Work That Way”, connectionist theories can’t even ask the question.
Another example follows from the logic example in my first post. We can have complex formulas of propositional logic, whose truth values are determined by the truth values of their constituents. The question of satisfiability is whether there is any assignment of truth values to the constituents that renders the whole formula true. This is another case where DL can’t even ask the question.
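The satisfiability question can itself be made concrete in a few lines: enumerate every truth assignment and test the formula. This brute-force sketch is only an illustration of the classical problem, not a proposal for how a DL or hybrid system should solve it:

```python
from itertools import product

def satisfiable(formula, variables):
    """Brute-force SAT: try every truth assignment to `variables` and
    return one that makes `formula` (a Python predicate) true, else None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (p or q) and (not p or not q): satisfied exactly when one of p, q holds
print(satisfiable(lambda a: (a["p"] or a["q"]) and (not a["p"] or not a["q"]),
                  ["p", "q"]))
```

Note that the question is posed over variables and their possible values, which is precisely the kind of question a system without explicit variables struggles to even state.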
For these examples I really do think we have to have machines which, to some extent, rely on principles similar to those of the human mind. I think this is also true for complex planning, etc.
As for the last part, I am a little sad about the economic motives of AI. At the very beginning, the biggest use of the technology was to figure out which link people would click; advertising was the biggest initial driver of this magic technology. Fortunately we have since found more important uses for it in fields like medical technology, farming, and a few other applications I have heard of, mainly where image recognition is important. That was a significant step forward. Self-driving cars are a telling story: very good in conditions where image recognition is all you need, but they fail badly in more complex situations where, for example, abductive reasoning is needed.
But still a lot of the monetary drivers are from companies like Facebook and Google who want to support their advertising revenue in one way or another.
I’m finding evidence of software techniques useful for simulating abductive thinking. There are uses in automated software quality testing, some work in symbolic reasoning tools (related to backward chaining), and I think some newer scientific tools that do hypothesis generation and testing.
I suspect that one obstacle to creating a tool that appears to have common sense is its lack of a world model, fwiw. But as I review what I’ve come across on the topic of developing AI with common sense, I suspect there are multiple paths to simulating it, depending on what suffices to demonstrate “common sense”: for example, common sense in discussing the world versus common sense in interacting with it.
I read through your posts to date and the comments here and on LessWrong. You got a lot of engagement and interest in exploring the details of your claims, in particular the examples you supply from GPT-3. You went into some depth with your Python examples and got some pushback. Your submissions will probably get read by FTX.
Accordingly, I agree with Linch’s idea that you could answer a question about the danger of an AI tool developed this century, whether it meets your criteria for a true AGI or not. Your answer would probably get some interest.
I understand your belief is that there might be another AI winter if too many people buy into AI hype about DL, but I don’t foresee that happening. Contributions from robotics will prevent that result, if nothing else does.
Thanks for the comments, Noah.
I also agree that the “AI winter” will be different this time, simply because the current AI summer has provided useful tools for dealing with big data, which will always find uses. Expert systems of old had very limited uses and a high cost of entry. DL models have a relatively low cost of entry, and most businesses have some problems that could benefit from that kind of analysis.
Well, when I write about the AI winter here, I mean what I took to be your focus, that is, true AGI: intelligent, self-aware artificial general intelligences.
If you want to submit another post for the prize, you can remove the prize tag from your current submissions. You might also post a draft here and ask for comments, to be sure that you are being read correctly.
Hmmm. I hope we are not talking past each other here. I realise that the AI winter will follow from the failure of AGI. But DL as an analysis tool is so useful that “AI” won’t completely disappear. Nor will funding, of course, though I suspect it will be reduced once the enthusiasm dies down.
So I hope my current submission is not missing the mark on this, as I don’t see any contradiction in my view regarding an “AI winter”.
OK, as we have communicated, you have filled in what I took to be gaps in your original presentation. It might be useful to review your discussions with the many people who have shown an interest in your work, and see if you can write a final piece that summarizes your position effectively. I, for one, had to interpret your position and ask for your feedback in order to be sure I was summarizing it correctly.
My position, which stands in contrast to yours, is that current research in AI and robotics could lead to AGI if other circumstances permit it. I don’t particularly think it is necessary or a good idea to develop AGI, since doing so would only add danger and difficulty to an already difficult world scene (as well as add new persons to it), but I also think it is important to recognize the implications once it happens.