Singleton takeover seems very likely simply down to the speed advantage of the first mover (at the sharp end of the intelligence explosion it will be able to do subjective decades of R&D before the second mover gets off the ground, even if the second mover is only hours behind).
at the sharp end of the intelligence explosion it will be able to do subjective decades of R&D before the second mover gets off the ground, even if the second mover is only hours behind
Where are you getting those numbers from? If by “subjective decades” you mean “decades of work by one smart human researcher”, then I don’t think that’s enough to secure its position as a singleton.
If you mean “decades of global progress at the global tech frontier”, then I’m skeptical that the first-mover can fit ~100 million human research-years into a few hours shortly after (presumably) pulling away from the second-mover in a software intelligence explosion (for reasons I’m happy to elaborate on).
Thinking about it some more, I think I mean something more like “subjective decades of strategising and preparation at the level of intelligence of the second mover”, so it would be able to counter anything the second mover does to try to gain power.
But also there would be software intelligence explosion effects (I think the figures you have in your footnote 37 are overly conservative; human level is probably closer to “GPT-5”).