That would be bad, yes. But lowering the risk (significantly) means that it’s (significantly) less likely that we will go extinct! Say we lower the risk from 1⁄6 (Toby Ord’s all-things-considered estimate for x-risk over the next 100 years) to 1⁄60 this century. We’ve then bought ourselves a lot more time (in expectation) to lower the risk further. If we keep doing this at a high enough rate, we will very likely not go extinct for a very long time.
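To make the "in expectation" claim concrete, here is a minimal sketch (my own simplification, assuming the per-century risk is either constant or cut by a fixed factor each century):

```latex
% Assumption: a constant per-century extinction risk p. The survival time T
% (measured in centuries) is then geometrically distributed, so
\mathbb{E}[T] = \frac{1}{p}
% i.e. lowering p from 1/6 to 1/60 raises the expected survival time from
% about 6 centuries to about 60 centuries.
%
% If instead the risk in century n is p_n and we keep cutting it fast enough
% that \sum_n p_n converges (e.g. 1/6, 1/60, 1/600, ...), then
\Pr(\text{never go extinct}) = \prod_{n=1}^{\infty} (1 - p_n) > 0
% which is the sense in which "keeping this up at a high enough rate" means we
% very likely never go extinct.
```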
It is possible to make the difference between AI being pretty well aligned and very well aligned, so we might be able to impact whether the future is good or great.
I think “pretty well aligned” basically means we still all die; once you factor in the AI’s power level increasing to superintelligence, it has to be very well (or perfectly) aligned to be compatible with human existence. So it’s basically all or nothing (I’m with Yudkowsky/MIRI on this).