Because it’s possible that even in unstable, diverse futures, catastrophe can be avoided. As to the long-term future after the Singularity, that’s a question we will deal with when we get there.
I don’t think “dealing with it when we get there” is a good approach to AI safety. I agree that bad outcomes could be averted in unstable futures, but I’d still prefer to reduce the risk as much as possible beforehand.
I’m not sure why this should be reassuring. It doesn’t sound clearly good to me. In fact, it sounds pretty controversial.