This is along the lines of what I’m thinking when I say AGI Governance. The scenario outlined by the winner of FLI’s World Building Contest is an optimistic vision of this.
2. Use early AGI tech to limit AGI proliferation.
This sounds like something to be done unilaterally, as per the ‘pivotal act’ that MIRI folk talk about. To me, such a thing seems pretty much as impossible as safely, fully aligning an AGI, so working towards doing it unilaterally seems pretty dangerous — not least for its potential role in exacerbating race dynamics. Maybe the world will be ended by a hubristic team convinced that their AGI is safe enough to perform such a pivotal act, and that they need to run it because another team is very close to unleashing their potentially world-ending unaligned AGI. Or by another team seeing all the GPUs starting to melt and pressing go on their (still-not-fully-aligned) AGI… It’s like MAD, but for well-intentioned would-be world-savers.
I think your view of AGI governance is idiosyncratic because of thinking in such unilateralist terms. Maybe a pivotal act could be a move that leads to the world winning, but even though effective, broad, global-scale governance of AGI might seem insurmountable, I think it’s the better shot. See also aogara’s comment and its links.