I agree that regulation is enormously important, but I’m not sure about the following claim:
“That means that aligning an AGI, while creating lots of value, would not reduce existential risk”
It seems, naively, that an aligned AGI could help us detect and prevent other power-seeking AGIs. It wouldn't completely eliminate the risk, but I feel even a single aligned AGI would make the world a lot safer against misaligned AGI.
Thanks for the comment. I think the ways an aligned AGI could make the world safer against unaligned AGIs can be divided into two categories: preventing unaligned AGIs from coming into existence, or stopping already existing unaligned AGIs from causing extinction. The second is the offense/defense balance. The first is what you are pointing at.
If an AGI were to prevent people from creating AI, it would likely do so against their will. A state would be the only actor that could do so legally (assuming there is regulation in place), and also the most practical one. Therefore, I think your option falls under what I described in my post as “Types of AI (hardware) regulation may be possible where the state actors implementing the regulation are aided by aligned AIs”. I think this is indeed a realistic option, and it may reduce existential risk somewhat. Getting the regulation in place at all, however, seems more important at this point than developing what I see as a pretty far-fetched and, at the moment, intractable way to implement it more effectively.