Yeah sure, here are two reasonable positions:
(A) “We should plan for the contingency where LLMs (or scaffolded LLMs etc.) scale to AGI, because this contingency is very likely what’s gonna happen.”
(B) “We should plan for the contingency where LLMs (or scaffolded LLMs etc.) scale to AGI, because this contingency is more tractable and urgent than the contingency where they don’t, and hence worth working on regardless of its exact probability.”
I think plenty of AI safety people are in (A), which is at least internally consistent even if I happen to think they’re wrong. I also think there are lots of AI safety people who would say they’re in (B) if pressed, but who long ago lost track of the fact that that’s what they were doing; instead they’ve started treating the contingency as a definite expectation, and thus they say things that omit essential caveats, or are otherwise wrong or misleading. ¯\_(ツ)_/¯