By doom I mean there are no humans left. How does your scenario not lead to that eventually?
Eventually, it probably does. But the same is true of our current trajectory. I think the sort of AI we seem likely to get first raises the short-run p(doom) somewhat, but primarily by intensifying existing risks.
Silicon Moloch is likely to foom, surely? (Cf. the “rapid dose of history” that will happen.)
In the limit of maximally intense competition (which we won't see, but might approach), probably not. We'd get a thousand years' worth of incremental software improvements in a decade and a (slower) takeoff in chip production, but not runaway technological progress: while returns to immediate reinvestment stay that high, nothing with a decades-long payoff is worth funding. In principle the risk-free rate will eventually drop low enough for basic research to become economical, but I expect the ascended economy will go the way of all feedback loops and undermine its own preconditions long before that.
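To make the discounting point concrete, here is a toy net-present-value calculation; the cost, payoff, and fifty-year horizon are invented for illustration, not figures from anywhere.

```python
# Toy illustration (all numbers made up): a basic-research programme that
# costs 1 unit per year for 50 years and pays off 500 units at year 50.
# Whether it is worth funding depends almost entirely on the discount rate.

def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of a yearly cashflow stream at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-1.0] * 50 + [500.0]   # 50 years of costs, then one delayed payoff

for rate in (0.20, 0.10, 0.05, 0.01):
    print(f"discount rate {rate:>4.0%}: NPV = {npv(rate, project):7.1f}")
```

On these made-up numbers the project only becomes worth funding once the rate falls to around five percent; the claim above is that a maximally competitive economy never lets the prevailing rate get that low before undermining itself.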
What sort of civilization comes out the other end is anyone’s guess, but I doubt it’ll be less equipped to protect itself than we are.
I expect that:

- Riding transformer scaling laws all the way to the end of the internet still only gets you something at most moderately superhuman (see the back-of-the-envelope sketch after this list). This would be civilization-of-immortal-geniuses dangerous, but not angry-alien-god dangerous: MAD is still in effect, for instance. No nanotech, no psychohistory. In particular, such systems won't be smart enough to determine a priori whether an alternative architecture can go Foom.
- Foom candidates will not be many orders of magnitude cheaper to train than mature language models.
- As a result, the marginal return on trying to go Foom will be zero. If it happens, it'll be the result of deliberate effort by an agent with lots and lots of slack to burn, not something that accidentally falls out of market dynamics.
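As a back-of-the-envelope sketch of the first point, here is the sort of calculation behind "the curves flatten out", plugging assumed model and dataset sizes into the parametric loss fit from Hoffmann et al. (2022). The fit constants are the commonly quoted published values, the "end of the internet" token counts are guesses, and pretraining loss is only a loose proxy for capability.

```python
# Rough sketch: predicted training loss L(N, D) = E + A/N^alpha + B/D^beta,
# using the commonly quoted Chinchilla fit constants.  The (N, D) pairs below
# are assumptions about how far parameters and data could plausibly be pushed.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Parametric scaling-law estimate of pretraining loss."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

for n_params, n_tokens in [
    (70e9, 1.4e12),    # roughly Chinchilla itself
    (1e12, 20e12),     # ~20 tokens/param at a guessed "most of the internet"
    (10e12, 200e12),   # a further 10x on both axes, well past today's data
]:
    loss = predicted_loss(n_params, n_tokens)
    print(f"N = {n_params:.0e} params, D = {n_tokens:.0e} tokens -> loss ~ {loss:.2f} (floor {E})")
```

On this fit, pushing parameters and data more than two orders of magnitude past Chinchilla recovers only about 0.2 nats against an irreducible floor of 1.69.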
On the hardware side, the slower takeoff in chip production means something like a 10,000,000-fold increase in transistor density. We might return to 20th-century compute cost improvements for a bit, if things get really, really cheap, but it's not going to move anywhere near as fast as software (see the rough arithmetic below).
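For scale, here is the arithmetic behind that comparison; the roughly two-year doubling time is an assumed historical figure, not something from the text above.

```python
import math

# How long would a 10,000,000-fold increase in transistor density take at
# something like the historical pace?  Assumes ~2 years per density doubling.
factor = 1e7
doublings = math.log2(factor)        # about 23.3 doublings
years = doublings * 2.0              # assumed two-year doubling time
print(f"{doublings:.1f} doublings, roughly {years:.0f} years at that pace")
```

Call it twenty-odd doublings, i.e. several decades at 20th-century rates, against the decade-scale software sprint described above.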