I’d argue it’s even less stable than nukes, but there’s one reassuring point: the future will ultimately be very weird, with thousands, millions, or billions of AIs, posthumans, and genetically engineered beings, and the borders between them will be porous and dissolvable. That’s important to keep in mind. Also, we don’t need arbitrarily long alignment; aligning AI for 50-100 years is enough. Nothing needs to be stable in the long term; we just need stability through the short-term chaos.
I’m not sure why this should be reassuring. It doesn’t sound clearly good to me. In fact, it sounds pretty controversial.
Because it’s possible that even in unstable, diverse futures, catastrophe can be avoided. As for the long-term future after the Singularity, that’s a question we will deal with when we get there.
I don’t think “dealing with it when we get there” is a good approach to AI safety. I agree that bad outcomes could be averted in unstable futures, but I’d still prefer to reduce the risk as much as possible beforehand.