“It doesn’t naively seem like AI risk is noticeably higher or lower if recursive self-improvement doesn’t happen.” If I understand right, recursive self-improvement being possible would greatly increase take-off speed, giving us much less time to fix things on the fly. Also, when Yudkowsky has talked about doomsday foom, my recollection is that he was generally assuming recursive self-improvement of a quite fast variety. So it is important.
(Implementing the AGI in a Harvard architecture, where source code is not in accessible/addressable memory, would go some way toward preventing recursive self-improvement.)
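To make that concrete, here is a toy sketch (purely illustrative, not any real AGI design; the class names are made up) of the difference: in a von-Neumann-style setup the program’s instructions sit in the same writable memory as its data, so self-modification is trivial, while in a Harvard-style setup instruction memory isn’t addressable from the program’s side, so the most direct route to rewriting your own code is simply absent.

```python
# Hypothetical toy illustration of the von Neumann vs. Harvard distinction.

class VonNeumannMachine:
    def __init__(self, memory):
        self.memory = memory  # instructions and data share one writable memory

    def self_modify(self, addr, new_instruction):
        # Rewriting your own code is just an ordinary memory write.
        self.memory[addr] = new_instruction


class HarvardMachine:
    def __init__(self, program, data):
        self.program = tuple(program)  # instruction memory: not writable as data
        self.data = data               # data memory: writable

    def self_modify(self, addr, new_instruction):
        # There is no path from data space into instruction memory.
        raise PermissionError("instruction memory is not writable from data space")
```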
Unfortunately it’s very hard to reason about how easy or hard recursive self-improvement would be, because we have absolutely no idea what a future existentially dangerous AGI will look like. An agent might be able to add some “plugins” to its source code (for instance to access various APIs online or run scientific simulation code), but if AI systems continue trending in the direction they are, a lot of its intelligence will probably be impenetrable deep nets.
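Roughly the picture I have in mind, as a hypothetical sketch (the `Agent` class and plugin registry here are invented for illustration): the agent can bolt on new, legible tools, but the core policy stays an opaque learned function it can’t usefully inspect or rewrite.

```python
# Hypothetical sketch of "plugins around an opaque core" -- not any real system.
from typing import Callable, Dict


class Agent:
    def __init__(self, core_policy: Callable[[str], str]):
        self.core_policy = core_policy          # opaque deep net: observation -> action
        self.plugins: Dict[str, Callable] = {}  # legible, editable extensions (APIs, simulators)

    def add_plugin(self, name: str, fn: Callable) -> None:
        # "Self-improvement" is easy at the plugin level...
        self.plugins[name] = fn

    def act(self, observation: str) -> str:
        # ...but the decision-making itself comes from a core the agent
        # cannot meaningfully read or rewrite.
        return self.core_policy(observation)
```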
An alternative scenario would be that intelligence level is directly related to something like the “number of cortical columns”, and so to get smarter you just scale that up. The cortical columns are just world-modeling units, and something like an RL agent uses them to get reward. In that scenario, improving your world-modeling ability by increasing the number of cortical columns doesn’t really affect alignment much.
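Here is a toy way of picturing that scenario (the function names and numbers are made up): the scaling knob only improves prediction accuracy, while the reward function that actually drives behavior, which is the alignment-relevant part, isn’t touched by the scaling at all.

```python
# Toy sketch: "intelligence" scales with the number of world-modeling units,
# but the reward function stays fixed. Purely illustrative.
import random


def make_world_model(n_columns: int):
    """More 'columns' -> more independent guesses averaged together -> less error."""
    def predict(state: float) -> float:
        guesses = [state + random.gauss(0, 1.0) for _ in range(n_columns)]
        return sum(guesses) / n_columns
    return predict


def fixed_reward(state: float) -> float:
    # Unchanged no matter how many columns the world model has.
    return -abs(state)


small_agent = make_world_model(n_columns=10)
big_agent = make_world_model(n_columns=10_000)  # "smarter" only in modeling accuracy
```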
All this is just me talking off the top of my head. I am not aware of this being written about more rigorously anywhere.