I have to say, the bad part supports my observation!
Steven was responding to this:
The community of people most focused on keeping up the drumbeat of near-term AGI predictions seems insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group's norms), and closed-off to even reasonable, relatively gentle criticism
None of Steven's bullet points support this. Many of them say the exact opposite of this.
Unless I misinterpreted what Steven was trying to say, this supports my observation in the OP about insularity:
There were a number of people, all quite new to the fields of AI and AI safety / alignment, for whom it seems to have never crossed their mind until they talked to me that maybe foundation models won't scale to AGI, and likewise who didn't seem to realize that the field of AI is broader than just foundation models.
How could you possibly never encounter the view that "foundation models won't scale to AGI"? How could an intellectually healthy community produce this outcome?
There's a popular mistake these days of assuming that LLMs are the entirety of AI, rather than a subfield of AI.
If you make this mistake, then you can go from there to either of two faulty conclusions:
(Faulty inference 1) Transformative AI will happen sooner or later [true IMO] THEREFORE LLMs will scale to TAI [false IMO]
(Faulty inference 2) LLMs will never scale to TAI [true IMO] THEREFORE TAI will never happen [false IMO]
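To make explicit how both inferences lean on the same unstated assumption, here is a minimal propositional sketch; the letters T, S, and H are my own labels, not notation from anyone in this thread. Write T for "transformative AI will eventually happen", S for "LLMs will scale to TAI", and H for the hidden premise "if TAI happens, it happens via scaled-up LLMs", i.e. H is the implication T → S.

\[
\text{(1)}\quad T,\; (T \to S) \;\vdash\; S
\qquad\qquad
\text{(2)}\quad \neg S,\; (T \to S) \;\vdash\; \neg T
\]

Inference (1) is modus ponens with H silently supplied; inference (2) is modus tollens with the same H (equivalently, its contrapositive ¬S → ¬T). Neither conclusion follows without H, and H is exactly the "LLMs are the entirety of AI" mistake: someone who rejects H can consistently hold both T and ¬S, which is the [true IMO] / [false IMO] combination asserted above.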
I have seen an awful lot of both (1) and (2), including by e.g. CS professors who really ought to know better (example), and I try to call out both of them when I see them.
You yourself seem mildly guilty of something-like-(2), in this very post. Otherwise you would be asking questions like "how quickly can AI paradigms go FROM obscure and unimpressive arxiv papers that nobody has heard of, TO a highly-developed technique subject to untold billions of dollars and millions of person-hours of investment?", and you'd notice that an answer like "5 years" is not out of the question. (See second half of this comment.)
I'm not sure how you define "imminent" in the OP title, but FWIW, LLM skeptic Yann LeCun says human-level AI "will take several years if not a decade…[but with] a long tail", and LLM skeptic Francois Chollet says 2038-2048.
You had never thought through "whether artificial intelligence could be increasing faster than Moore's law." Should we conclude that AI risk skeptics are "insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group's norms), and closed-off to even reasonable, relatively gentle criticism"?
That seems like a non sequitur and a calculated insult, not a good-faith effort to engage with the substance of my argument.