Etzioni's implicit argument against AI posing a nontrivial existential risk seems to be the following:
(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.
(b) Before human-level AI is developed, there will be "canaries collapsing" warning us that human-level AI is potentially coming soon or at least is no longer a "very low probability" on the timescale of a couple decades.
(c) "If and when a canary 'collapses,' we will have ample time before the emergence of human-level AI to design robust 'off-switches' and to identify red lines we don't want AI to cross"
(d) Therefore, AI does not pose a nontrivial existential risk.
It seems to me that if there is a nontrivial probability that he is wrong about (c), then it is in fact meaningful to say that AI does pose a nontrivial existential risk, one we should start preparing for before the canaries he mentions start collapsing.