Just wanted to note that while I am quoted as being optimistic, I am still working on it specifically to cover the x-risk case and not the value lock-in case. (But certainly some people are working on the value lock-in case.)
(Also I think several people would disagree that I am optimistic, and would instead think I’m too pessimistic, e.g. I get the sense that I would be on the pessimistic side at FHI.)
Also, for posterity, there’s some interesting discussion of that interview with Rohin here.
And some other takes on “Why AI risk might be solved without additional intervention from longtermists” are summarised here, and then discussed further in the comments.
But very much in line with technicalities’ comment, it’s of course totally possible to believe that AI risk will probably be solved without additional intervention from longtermists, and yet still think that serious effort should go into raising that probability further.
Great quote from The Precipice on that general idea, in the context of nuclear weapons:
In 1939, Enrico Fermi told Szilard the chain reaction was but a ‘remote possibility’ [...]
Fermi was asked to clarify the ‘remote possibility’ and ventured ‘ten percent’. Isidor Rabi, who was also present, replied, ‘Ten percent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it’s ten percent, I get excited about it’.