I think you’re confused about what different parts of the AI risk community are concerned about. Your explanation addresses the risks of a human-caused, AGI-assisted catastrophe. What Eliezer and others are warning about is a post-foom misaligned AGI. And no, a united, peaceful, adaptable world that managed to address the specific risks of pandemics and nuclear war would not be in a materially better position to “stave off” a highly superhuman agent that controls its communications systems. This is akin to the paradigm of computer security by patching individual components: it will keep out the script kiddies, but not the NSA.
So as far as I understand it, the key question that divides the different parts of the AI risk community is the timeline for AGI takeoff, and that has little to do with a cultural approach to risk and everything to do with the risk analysis itself. (And we already had the rest of this discussion in the comments on the link to your views on non-infallible AGI.)
Foom is not a requirement for AI-risk worries. If it were, I would be even less worried, because in my opinion AI-go-foom is extremely unlikely. Correct me if I’m wrong, but I was under the impression that plenty of AI x-riskers were not foomers?
I think even the foom skeptics (e.g. Christiano) think that a foom will eventually happen, even if there is a slow takeoff over many years first.
I was inexact: by “post-foom” I simply meant after a capabilities takeoff occurs, regardless of whether that takes months, years, or even decades, as long as humanity doesn’t manage to notice and successfully stop an ASI from being deployed.