One minor quibble with this post's language, rather than any of its actual claims: The title includes the phrase "safety by default", and the terms "optimism" and "optimist" are repeatedly applied to these researchers or their views. The title is reasonable in a sense, as these interviews were partially/mostly about whether AI would be "safe by default", or why we might believe that it would be, or why these researchers believe that that's likely. And the use of "optimism"/"optimist" is reasonable in a sense, as these researchers were discussing why they're relatively optimistic compared to something like the "typical MIRI view".
But it seems potentially misleading to use those phrases here without emphasising (or at least mentioning) that at least some of these researchers think there's a greater than 1% chance of extinction or other existential catastrophe as a result of AI. E.g., the statement "Rohin reported an unusually large (90%) chance that AI systems will be safe without additional intervention" implies a 10% credence that that won't be the case (and Paul and Adam seem to share very roughly similar views, based on Rohin's summaries). Relevant quote from The Precipice:
In 1939, Enrico Fermi told Szilard the chain reaction was but a "remote possibility" [...]
Fermi was asked to clarify the "remote possibility" and ventured "ten percent". Isidor Rabi, who was also present, replied, "Ten percent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it's ten percent, I get excited about it."
And in this case, the stakes are far greater (meaning no offence to Isidor Rabi).
My guess would be that a decent portion of people who (a) were more used to something like the FHI/80k/Oxford views, and less used to the MIRI/Bay Area views, and (b) read this without having read the interviews in great detail, might think that these researchers believe something like "The chance things go wrong is too small to be worth anyone else worrying about." That doesn't seem accurate, at least for Rohin, Paul, and Adam.
To be clear: I don't think you're intending to convey that message. And I definitely wouldn't want to try to shut down any statements about AI that don't sound like "this is a huge deal, everyone get in here now!" I'm just a bit concerned about posts accidentally conveying an overly optimistic/sanguine message when that wasn't actually their intent, and when it wasn't supported by the arguments/evidence provided.
(Something informing this comment is my past experience reading a bunch of cognitive science work on how misinformation spreads and can be sticky. Some discussion here, and a particularly relevant paper here.)