By doom I mean there are no humans left. How does your scenario not lead to that eventually? Silicon Moloch is likely to foom, surely? (Cf. the “rapid dose of history” that will happen.) What’s to stop the GPT-6+AutoGPT+plugins agent-like economy developing GPT-7, and so on?
Eventually, it probably does. But the same is true of our current trajectory. I think the sort of AI we seem likely to get first raises the short run p(doom) somewhat, but primarily by intensifying existing risks.
In the limit of maximally intense competition (which we won’t see, but might approach), probably not. We’d get a thousand years’ worth of incremental software improvements in a decade and a (slower) takeoff in chip production, but not runaway technological progress: in principle the risk-free rate will eventually drop to a level where basic research becomes economical, but I expect the ascended economy will go the way of all feedback loops and undermine its own preconditions long before that.
What sort of civilization comes out the other end is anyone’s guess, but I doubt it’ll be less equipped to protect itself than we are.
I expect that riding transformer scaling laws all the way to the end of the internet still only gets you something at most moderately superhuman. This would be civilization-of-immortal-geniuses dangerous, but not angry-alien-god dangerous: MAD is still in effect, for instance. No nanotech, no psychohistory. In particular, such systems won’t be smart enough to determine a priori whether an alternative architecture can go Foom.
I also expect that Foom candidates will not be many orders of magnitude cheaper to train than mature language models, and that as a result the marginal return on trying to go Foom will be zero. If it happens, it’ll be the result of deliberate effort by an agent with lots and lots of slack to burn, not something that accidentally falls out of market dynamics.
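For a rough sense of why finite text acts as a ceiling here, a back-of-envelope sketch using the loss fit published in Hoffmann et al. 2022 (the "Chinchilla" scaling paper); the dataset size and model sizes below are placeholder assumptions, not figures from this exchange.

```python
# Back-of-envelope using the Chinchilla loss fit L(N, D) = E + A/N^alpha + B/D^beta,
# with the constants fitted in Hoffmann et al. 2022. Dataset/model sizes are assumptions.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for n_params parameters trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

WEB_TEXT = 5e13  # assumed order of magnitude for "the end of the internet", in tokens
for n_params in (7e10, 1e12, 1e14):  # 70B, 1T, 100T parameters
    print(f"{n_params:.0e} params on {WEB_TEXT:.0e} tokens -> loss ~ {predicted_loss(n_params, WEB_TEXT):.3f}")

# Even with unlimited parameters, fixed data pins the loss near E + B/D^beta:
print(f"data-limited floor: ~ {E + B / WEB_TEXT**BETA:.3f} (irreducible term E = {E})")
```

On this fit, going from 1T to 100T parameters on a fixed corpus buys only a few hundredths of a nat, which is the sense in which "the end of the internet" is a ceiling rather than a runway.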
And you’re saying this isn’t enough for foom? We’ve had ~50 years of software development so far and gone from 0 to GPT-4.
And a 10,000,000-fold increase in transistor density. We might return to 20th-century compute-cost improvements for a bit, if things get really, really cheap, but it’s not going to move anywhere near as fast as software.
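The compounding behind that figure, for anyone who wants to check it; the two-year doubling period is the usual rule of thumb, not a measured cadence.

```python
# Rough Moore's-law compounding: transistor density doubling every ~2 years (assumed cadence).
years = 50
doubling_period = 2  # years per doubling, rule-of-thumb assumption
factor = 2 ** (years / doubling_period)
print(f"{years} years at one doubling per {doubling_period} years -> ~{factor:.1e}x density")
# ~3.4e+07, the same order of magnitude as the "10,000,000-fold" above
```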
“Riding transformer scaling laws all the way to the end of the internet”
What about riding them to the limits of data capture? There are still many orders of magnitude more data that could be collected; imagine all the billions of cameras in the world recording video 24/7, for a start. Or the limits of data generation? There are already companies creating synthetic data for training ML models.
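To put rough numbers on “many orders of magnitude”, a Fermi estimate; every input below is an assumption chosen only for order of magnitude.

```python
# Fermi estimate: potential data from always-on cameras vs. a frontier text corpus.
# All inputs are assumptions chosen only for order of magnitude.
CAMERAS = 1e9                    # assumed: ~a billion connected cameras recording 24/7
SECONDS_PER_DAY = 24 * 3600
USEFUL_TOKENS_PER_SECOND = 100   # assumed usable signal per camera-second after heavy compression
video_tokens_per_day = CAMERAS * SECONDS_PER_DAY * USEFUL_TOKENS_PER_SECOND

TEXT_CORPUS = 1e12               # the ~10^12-word figure cited for GPT-4 later in this thread
print(f"video: ~{video_tokens_per_day:.0e} tokens/day")
print(f"text corpus: ~{TEXT_CORPUS:.0e} words total")
print(f"ratio: ~{video_tokens_per_day / TEXT_CORPUS:.0e} corpus-equivalents per day")
```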
“And a 10,000,000-fold increase in transistor density.”
There’s probably at least another 100-fold hardware overhang in terms of under-utilised compute that could be immediately exploited by AI; much more if all GPUs/TPUs are consolidated for big training runs.
Also, you know those uncanny ads you get that are related to what you were just talking about? Google is likely already capturing more spoken words per day from phone mic recording than were used in the entirety of the GPT-4 training set (~10^12).
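Rough numbers behind both of those claims, for what they’re worth; these only size the opportunity (whether the audio is actually captured is the contested part), and every input is an assumption, including the frontier-cluster size, which is a widely repeated but unconfirmed report.

```python
# Two Fermi estimates; all inputs are assumptions chosen only for order of magnitude.

# (1) Hardware overhang: one frontier training run vs. the worldwide accelerator base.
FRONTIER_RUN_GPUS = 2.5e4        # assumed scale of a single frontier training cluster
INSTALLED_ACCELERATORS = 5e6     # assumed: datacenter GPUs/TPUs worldwide that could be pooled
print(f"overhang if consolidated: ~{INSTALLED_ACCELERATORS / FRONTIER_RUN_GPUS:.0f}x")

# (2) Words spoken within microphone range of phones each day.
PHONES = 3e9                     # assumed: smartphones in active use
WORDS_PER_PERSON_PER_DAY = 1.5e4 # common ballpark for words a person speaks per day
words_near_phones = PHONES * WORDS_PER_PERSON_PER_DAY
GPT4_WORDS = 1e12                # the ~10^12 figure in the comment above
print(f"~{words_near_phones:.0e} words/day near phones vs ~{GPT4_WORDS:.0e} in the corpus "
      f"(~{words_near_phones / GPT4_WORDS:.0f}x)")
```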