Etzioni’s implicit argument against AI posing a nontrivial existential risk seems to be the following:
(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.
(b) Before human-level AI is developed, there will be ‘canaries collapsing’ warning us that human-level AI is potentially coming soon or at least is no longer a “very low probability” on the timescale of a couple decades.
(c) “If and when a canary ‘collapses,’ we will have ample time before the emergence of human-level AI to design robust ‘off-switches’ and to identify red lines we don’t want AI to cross.”
(d) Therefore, AI does not pose a nontrivial existential risk.
It seems to me that if there is a nontrivial probability that he is wrong about (c), then it is in fact meaningful to say that AI poses a nontrivial existential risk, one we should start preparing for before the canaries he mentions begin collapsing.