One reason I consider the scenario I described likely is that I find it more plausible that future software systems will consist of a multitude of specialized systems with quite different designs, even in the presence of AGI, rather than most everything being done by copies of some singular AGI system.
Can you explain why this is relevant to how much effort we should put into AI alignment research today?
In brief: the less the specific structure of AGI determines future outcomes, the less relevant that structure is, and the less worth investing in.