Ok, with that out of the way, let’s get closer to the substance of the original question about paradigms of AI.
So something that isn’t talked about much is what’s known as “classical AI” or “symbolic AI”.
Symbolic AI or classical AI was the original paradigm for general AI. (This is written up elsewhere, including on Wikipedia, which is the source of my knowledge for everything that follows.)
Here’s my take to explain the relevance of symbolic AI.
Ok, close your eyes, and imagine for a second that everything in the world is the same, but that deep learning or neural networks don’t exist yet. They just haven’t been discovered.
In this situation, if you told a genius in math or physics that there was a danger of building an evil, world-conquering AI, what would that genius think about?
I honestly bet that they wouldn’t think about feeding giant spreadsheets of data into a bunch of interconnected logistic functions to run argmax, which is essentially what deep learning (RNNs, transformers, etc.) does today.
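To make that caricature concrete, here’s a deliberately crude toy sketch (made-up sizes, random untrained weights, nothing like a real model) of what “a bunch of interconnected logistic functions plus argmax” literally looks like:

```python
# Crude illustration only: stacked logistic (sigmoid) functions applied to
# arrays of numbers, with argmax at the end to pick a "most likely" label.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.normal(size=100)            # one row from the "giant spreadsheet"
W1 = rng.normal(size=(100, 32))     # the weights are just more numbers
W2 = rng.normal(size=(32, 10))      # 10 possible output labels

hidden = sigmoid(x @ W1)            # interconnected logistic functions
scores = sigmoid(hidden @ W2)
prediction = int(np.argmax(scores)) # "run argmax" to get the answer
print(prediction)
```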
Instead, this genius would think about the danger of directly teaching or programming computers with logic, such as giving them symbols and rules that seem to allow reasoning. They would think that you could directly evolve these reasoning systems, growing them into general cognition that becomes competitive with humans.
If you thought computers could think or use logic, this approach seems, like, really obvious.
(According to my 150 seconds of skimming Wikipedia) my guess is that the above way of thinking is exactly what symbolic or classical AI did, and it dominated the field until about 1985[1].
Note that this classical or symbolic approach was a full-on, mainstream effort. This wasn’t just something “very online”.
Symbolic AI was studied at MIT, Stanford, CMU, RAND, etc. This was a big deal, sort of like how molecular biology, or any mainstream approach in machine learning today, is a big deal.
To make the “symbolic reasoning” programme really concrete and tangible, one instance of it is probably the project “Cyc”: https://en.wikipedia.org/wiki/Cyc.
If you take a look at that article, you get a sense of the approach: store a lot of information as explicit facts, and try to build up reasoning from rules.
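To give a feel for the style (this is my own toy sketch, not Cyc’s actual representation or syntax), a “facts plus rules” system can be as simple as a loop that keeps applying rules until no new facts appear:

```python
# Toy forward-chaining sketch: store symbolic facts, apply if-then rules
# until nothing new can be derived. Purely illustrative, not Cyc's design.
facts = {("Socrates", "is_a", "human")}
rules = [
    # if ?x is_a human, then ?x is_a mammal
    (("is_a", "human"), ("is_a", "mammal")),
    # if ?x is_a mammal, then ?x is_a mortal
    (("is_a", "mammal"), ("is_a", "mortal")),
]

changed = True
while changed:
    changed = False
    for (cond_rel, cond_obj), (new_rel, new_obj) in rules:
        for (subj, rel, obj) in list(facts):
            if rel == cond_rel and obj == cond_obj:   # fact matches the rule's condition
                derived = (subj, new_rel, new_obj)    # bind ?x to the fact's subject
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(facts)  # adds ("Socrates", "is_a", "mammal") and ("Socrates", "is_a", "mortal")
```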
Ok, the story continues, but with a twist that’s directly relevant to the original question of paradigms.
(According to my 5 minutes of Wikipedia reading) a major critic of symbolic AI was this dude, Hubert Dreyfus. Dreyfus was a Harvard-trained philosopher on the MIT faculty when he was hired by RAND.
There, Dreyfus sort of turned on his new employer (“treacherous turn!”) to write a book against the establishment’s approach to AI.
Dreyfus basically predicted that the symbolic approach was a dead end.
As a result, it looks like he was attacked, and it was an epic drama mine. From Wikipedia:
“The paper ‘caused an uproar’, according to Pamela McCorduck. The AI community’s response was derisive and personal. Seymour Papert dismissed one third of the paper as ‘gossip’ and claimed that every quotation was deliberately taken out of context. Herbert A. Simon accused Dreyfus of playing ‘politics’ so that he could attach the prestigious RAND name to his ideas. Simon said, ‘what I resent about this was the RAND name attached to that garbage’.”
Ultimately, Dreyfus was essentially “proved right” (although there’s probably a lot more color here; most contrarians tend to be excessive, dogmatic, and fixated themselves, and are sometimes right for the wrong reasons).
Dreyfus did make a splash, but note that even if Dreyfus was correct, it’s unlikely that his impact was causal. It’s more likely the field petered out as the lack of results became too obvious to ignore (based on 3 paragraphs of Wikipedia and mostly my own beliefs).
But ultimately, the field of classical/symbolic AI fell, pretty bigly, to a degree that I don’t think many fields fall.
This is directly relevant to the “paradigm” issue in the original question.
This now-fallen symbolic AI reasoning faction/programme/agenda is a “paradigm”. This “paradigm” seemed extremely reasonable.
These figures were vastly influential and established: they dotted the top CS schools and the top think tanks, and were connected with the strongest companies of the time. This AI community was probably vastly more influential than the AI safety community currently is, by a factor of 20 or more.
It still seems like “symbolic logic”, with some sort of evolution/training, is a reasonable guess for how to build general, strong AI.
In contrast, what exactly are neural networks doing? To many people, deep learning, using a 400B parameter model to hallucinate images and text, seems like a labored and quixotic approach to general AI. So how are these deep learning systems dangerous in any way? (Well, I think there are groups of principled people who have written up important content on this.)
(I’m not a critic, and I’m not stealthily undermining AI safety; this series of comments isn’t a proxy for critiquing AI safety. These lessons just seem generally useful.)
The takeaways from this comment are:
It’s very likely that the fall of symbolic AI still hangs over the general AI field (the “AI winter”). It’s very possible that the reaction was an overshoot that hampers AI safety today in some way. (But 10 minutes of reading Wikipedia isn’t enough for me to speculate further.)
This is a perfect example of a paradigm shift. Symbolic AI was a major commitment by the establishment; the field resisted intensely, but it ultimately fell hard.
There are probably other “paradigms” (ugh, I hate using this phrase and framework) that might be relevant to AI safety. Like, I guess maybe someone should take a look at this as AI safety continues.
Final comment—quick take on “current approach in AI”
If you’re still here, still reading this comment chain (my guess is that there is a 90% chance the original commenter is gone, and most forum readers are gone too), you might be confused because I haven’t mentioned the “current paradigm of ML or AI”.
For completeness it’s worth filling this in. So like, here it is:
Basically, the current paradigm of ML or deep learning is a lot less deep than it seems.
Essentially, people have models, like a GPT-3/BERT transformer, that they tweak, run data through, and look at the resulting performance.
To build these models and new ones, people basically extend and modify existing models with new architectures or data, and look at how the results perform on established benchmarks (e.g. language translation, object detection).
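A sketch of that loop (every function here is a stubbed-out placeholder I made up; no real framework or benchmark API is being referenced) might look like:

```python
# Sketch of the "tweak, train, check the benchmark" workflow described above.
# Every function is a hypothetical stand-in, not a real library call.
import random

def apply_tweak(model, tweak):        # stand-in for adding a layer, changing the data mix, etc.
    return dict(model, last_tweak=tweak)

def train(model):                     # stand-in for an actual (expensive) training run
    return model

def evaluate(model, benchmark):       # stand-in for scoring on e.g. translation or detection
    return random.random()

def iterate_on_model(baseline, tweaks, benchmark):
    """Try small modifications of an existing model and keep whichever scores best."""
    best_model, best_score = baseline, evaluate(baseline, benchmark)
    for tweak in tweaks:
        candidate = train(apply_tweak(baseline, tweak))
        score = evaluate(candidate, benchmark)
        if score > best_score:
            best_model, best_score = candidate, score
    return best_model, best_score

best, score = iterate_on_model({"name": "baseline"}, ["more_layers", "more_data"], benchmark=None)
```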
Yeah, this doesn’t sound super principled, and it isn’t.
To give an analogy, imagine bridge building in civil engineering. My guess is that when designing a bridge, engineers use intricate knowledge of materials and physics, and choose every single square meter of every component of the bridge in accordance with what is needed, so that the resulting whole stands up and supports every other component.
In contrast, another approach to building a bridge is to just bolt and weld a lot of pieces together, in accordance with intuition and experience. This approach probably would copy existing designs, and there would be a lot of tacit knowledge and rules of thumb (e.g. when you want to make a bridge 2x as big, you usually have to use more than 2x more material). With many iterations, over time this process would work and be pretty efficient.
The second approach is basically how a lot of deep learning works: people try different architectures and add layers, making moderate innovations over the last model.
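As a toy illustration of “moderate innovation” (made-up layer sizes, using PyTorch purely for notation), last year’s model and this year’s model might differ by little more than an extra layer:

```python
# Toy sketch: "innovation" as taking an existing architecture and adding a layer.
import torch.nn as nn

last_years_model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

this_years_model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),   # the "new" idea: one more layer
    nn.Linear(256, 10),
)
```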
There are a lot more details. Things like unsupervised learning, encoder-decoder models, latent spaces, and more are super interesting and involve real insights or new approaches.
I’m basically too dumb to build a whole new deep learning architecture, but many people are smart enough, and major advances can come from new insights. Still, a lot of it is this iteration and trial and error. The biggest enablers are large amounts of data and computing capacity, and, I guess, lots of investor money.