Also, for posterity, there’s some interesting discussion of that interview with Rohin here.
And some other takes on “Why AI risk might be solved without additional intervention from longtermists” are summarised, and then discussed in the comments, here.
But very much in line with technicalities’ comment, it’s of course totally possible to believe that AI risk will probably be solved without additional intervention from longtermists, and yet still think that serious effort should go into raising that probability further.
Great quote from The Precipice on that general idea, in the context of nuclear weapons:
> In 1939, Enrico Fermi told Szilard the chain reaction was but a ‘remote possibility’ [...] Fermi was asked to clarify the ‘remote possibility’ and ventured ‘ten percent’. Isidor Rabi, who was also present, replied, ‘Ten percent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it’s ten percent, I get excited about it’