I don’t think “dealing with it when we get there” is a good approach to AI safety. I agree that bad outcomes could be averted even in unstable futures, but I’d still prefer to reduce the risk as much as possible beforehand.