I’m just saying that the argument “this is a suicide race” is really not the way we should go. We should say the risk is >10% and that’s obviously unacceptable, because that’s an argument we can actually win.
Hmm, just to be clear, I think saying that “this deployment has a 1% chance of causing an existential risk, so you can’t deploy it” seems like a pretty reasonable ask to me.
I agree that I would like to focus on the >10% case first, but I also don’t want to set wrong expectations that I think it’s reasonable at 1% or below.
I agree. When I give numbers I usually say “We should keep the risk of AI takeover beneath 1%” (though I haven’t thought about it very much and mostly the numbers seem less important than the qualitative standard of evidence).
I think that 10% is obviously too high. I think that a society making reasonable tradeoffs could end up with 1% risk, but that it’s not something a government should allow AI developers to do without broader public input (and I suspect that our society would not choose to take this level of risk).
Cool, makes sense. Seems like we are mostly on the same page on this subpoint.