What about “Is Power-Seeking AI an Existential Risk?”? I don’t know if you’d count it as quantitative, but it is detailed.

Thanks for the comment, Ryan. I agree that the report by Joseph Carlsmith is quite detailed. However, I do not think it is sufficiently quantitative. In particular, the probabilities which are multiplied to obtain the chance of an existential catastrophe are directly guessed rather than derived from detailed modelling (in contrast to the AI takeoff speeds calculated in Tom’s report). Joseph was mostly aiming to describe the arguments qualitatively rather than to quantify the risk:

My main hope, though, is not to push for a specific number, but rather to lay out the arguments in a way that can facilitate productive debate.
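For concreteness, the headline number in Joseph’s report is the product of six directly guessed conditional probabilities. If I recall the original version correctly, the premise probabilities were 65%, 80%, 40%, 65%, 40% and 95%, so the calculation is roughly:

0.65 × 0.80 × 0.40 × 0.65 × 0.40 × 0.95 ≈ 5%

Each factor is an informed guess rather than the output of a model, which is the sense in which I find the report detailed but not sufficiently quantitative.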