It seems that AI safety discussions assume that once general intelligence is achieved, recursive self-improvement makes superintelligence inevitable.
How confident are safety researchers about this point?
Presumably, at some point the difficulty of further improvements grows faster than the intelligence gained from them, and the AI can no longer improve itself. Why do safety researchers expect that point to be a vast superintelligence rather than something only slightly smarter than a human?
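To make the intuition behind my question concrete, here is a toy recurrence (my own illustration, not a model taken from the safety literature): let $I_n$ be the system's capability after $n$ rounds of self-improvement, and let $g(I)$ be the gain a system of capability $I$ can engineer in itself. Then

$$ I_{n+1} = I_n + g(I_n). $$

If $g$ grows with $I$ (each improvement makes the next one easier), the sequence blows up and you get something like a fast takeoff. If the gains shrink quickly enough, say $g(I_n) \le c\,r^n$ with $r < 1$, the sequence converges to a finite limit $I^*$, and nothing in the argument says $I^*$ has to be far above human level. My question is essentially why researchers expect the first regime rather than the second.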
Definitely not an expert, but I think there is still no consensus on “slow takeoff vs fast takeoff” (fast takeoff is sometimes referred to as FOOM).
It’s a very important topic of disagreement; see e.g. https://www.lesswrong.com/posts/hRohhttbtpY3SHmmD/takeoff-speeds-have-a-huge-effect-on-what-it-means-to-work-1