I was under the impression that most people in AI safety felt this way—that transformers (or diffusion models) weren’t going to be the major underpinning of AGI.
I haven’t done any surveys or anything, but that seems very inaccurate to me. I would have guessed that >90% of “people in AI safety” are either strongly expecting that transformers (or diffusion models) will be the major underpinning of AGI, or at least they’re acting as if they strongly expect that. (I’m including LLMs + scaffolding and so on in this category.)
For example, people seem very happy to guess what tasks the first AGIs will be better and worse at based on current LLM capabilities; how much compute the first AGIs will require based on current LLM compute requirements; which companies are likely to develop AGI based on which companies are best at training LLMs today; what AGI UIs will look like based on the particular LLM interface of “context window → output token”; and so on. This kind of thing happens constantly, and sometimes I feel like I’m the only one who even notices. It drives me nuts.
Is that just a kind of availability bias—in the ‘marketplace of ideas’ (scare quotes) they’re competing against pure speculation about architecture & compute requirements, which is much harder to make estimates around & generally feels less concrete?
Yeah sure, here are two reasonable positions:
(A) “We should plan for the contingency where LLMs (or scaffolded LLMs etc.) scale to AGI, because this contingency is very likely what’s gonna happen.”
(B) “We should plan for the contingency where LLMs (or scaffolded LLMs etc.) scale to AGI, because this contingency is more tractable and urgent than the contingency where they don’t, and hence worth working on regardless of its exact probability.”
I think plenty of AI safety people are in (A), which is at least internally consistent even if I happen to think they’re wrong. I also think there are lots of AI safety people who would say they’re in (B) if pressed, but who long ago lost track of the fact that that’s what they were doing, and have instead started treating the contingency as a definite expectation. Thus they say things that omit essential caveats, or that are wrong or misleading in other ways. ¯\_(ツ)_/¯