the probabilities are of the order of 10^-3 to 10^-8, which is far from infinitesimal
I’m not sure what the probabilities are. You’re right that they are far from infinitesimal (just as every number is!): still, they may be close enough to warrant discounting on whatever basis people discount Pascal’s mugger.
what is important is reducing the risk to an acceptable level
I think the risk is pretty irrelevant. If we lower the risk but still go extinct, we can pat ourselves on the back for fighting the good fight, but I don’t think we should assign it much value. Our effect on the risk is instrumentally valuable for what it does for the species.
Also I don’t understand the comment on AI Alignment
The thought was that it is possible to make a difference between AI being pretty well and very well aligned, so we might be able to impact whether the future is good or great, and that is worth pursuing regardless of its relation to existential risk.
That would be bad, yes. But lowering the risk (significantly) means that it’s (significantly) less likely that we will go extinct! Say we lower the risk from 1⁄6 (Toby Ord’s all-things-considered estimate for x-risk over the next 100 years) to 1⁄60 this century. We’ve then bought ourselves a lot more time (in expectation) to lower the risk further. If we keep doing this at a high enough rate, we will very likely not go extinct for a very long time.
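A minimal sketch of the “in expectation” arithmetic, assuming (my simplification, not stated above) a constant per-century extinction probability $p$: survival time in centuries is then geometrically distributed, so

$$\mathbb{E}[\text{centuries survived}] = \frac{1}{p}, \qquad \frac{1}{1/6} = 6 \;\longrightarrow\; \frac{1}{1/60} = 60.$$

On this toy model, a tenfold cut in per-century risk buys roughly ten times as much expected time in which to lower the risk further.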
it is possible to make a difference between AI being pretty well and very well aligned, so we might be able to impact whether the future is good or great
I think “pretty well aligned” basically means we still all die; it has to be very well/perfectly aligned to be compatible with human existence, once you factor in an increase in power level of the AI to superintelligence; so it’s basically all or nothing (I’m with Yudkowsky/MIRI on this).