It seems unlikely that we’ll ever get AI x-risk down to negligible levels, but it’s striking how high a risk those building (and regulating) the technology are currently tolerating compared with, as you say, aviation, and also nuclear power (where the usual target is fewer than 1 catastrophic accident per 100,000 reactor-years). I think at the very least we need to reach a global consensus on what level of risk we are willing to tolerate before continuing to build AGI.