in which a minor slip-up means instant death for everyone, so a 1 − epsilon probability of success is unacceptable.
Oh, does Eliezer still think (speak?) that way? I think that would be the first clear reasoning error (one that can’t just be written off as a sort of opinionated specialization) I’ve seen him make about AI strategy. In a situation where there’s some yearly baseline risk of misaligned AGI being deployed by someone else (which is currently quite low, so this wouldn’t be active yet!), it does actually become acceptable to deploy a system that has a well-estimated risk of being misaligned. Techniques that only have a decent chance of working are actually useful and should be collected enthusiastically.
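To make the trade-off concrete, here’s a minimal sketch with made-up numbers (the function names and all figures are purely illustrative, not estimates of real-world risk): if the baseline yearly probability of someone else deploying misaligned AGI is r, then waiting T years for a better technique carries a cumulative risk of roughly 1 − (1 − r)^T, and deploying a system whose estimated misalignment probability is below that threshold is the lower-risk option.

```python
# Toy comparison, illustrative only: deploying a system with estimated
# misalignment probability p can beat waiting, once the cumulative baseline
# risk of someone else deploying misaligned AGI over the waiting period
# exceeds p.

def cumulative_baseline_risk(yearly_risk: float, years: float) -> float:
    """Probability that misaligned AGI gets deployed by someone else within `years`."""
    return 1.0 - (1.0 - yearly_risk) ** years

def deploy_now_is_better(p_misaligned: float, yearly_risk: float, years_to_wait: float) -> bool:
    """True if deploying now (risk p_misaligned) is lower-risk than waiting."""
    return p_misaligned < cumulative_baseline_risk(yearly_risk, years_to_wait)

# A 10% misalignment risk is unacceptable when the baseline risk is ~0,
# but acceptable if waiting 5 years at a 5%/year baseline (~23% cumulative).
print(deploy_now_is_better(0.10, yearly_risk=0.00, years_to_wait=5))  # False
print(deploy_now_is_better(0.10, yearly_risk=0.05, years_to_wait=5))  # True
```

This is just the expected-risk comparison made explicit; under a zero-risk policy the first case never flips, which is the point of contention.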
I don’t know that he still holds a zero-risk policy; I’ve been seeing a lot more “no, it will almost certainly be misaligned” from him recently. But it could have given rise to a lot of erroneous inferences.