Thanks for this response, I love this debate. One quick comment for a start (I might add more later).
You say “Despite this, he is more extreme in his confidence that things will be ok than the average expert”
From the perspective of an outsider like me, this statement doesn't seem right. In the only big survey I could find with thousands of AI experts, from 2024, the median p(doom) (which equates to the average expert) was 5%, pretty close to BB's. In addition, expert forecasters (who are usually better than domain experts at predicting the future) put the risk below 1%. Sure, many higher-profile experts hold more extreme positions, but these aren't the average, and there are some, like Yann LeCun, Hassabis, and Andreessen, who are below 2.6%. Even Ord is at 10%, which isn't that much higher than BB, who, to his credit IMO, tried to use statistics to arrive at his number.
My second issue here (maybe just personal preference) is that I don't love the way both you and @Bentham's Bulldog talk about "confidence". Statistically, when we talk about how confident we are in our predictions, this relates to how sure (confident) we are that our prediction is correct, not to whether our percentage (in this case p(doom)) is high or low. I understand that both meanings can be correct, but for precision and to avoid confusion I prefer the statistical definition of "confidence". It might seem like a nitpick, but I even prefer "how sure are you that ASI will kill us all" or just "I think there's a high probability that...".
By my definition of confidence, then, Bentham's Bulldog is far less confident than you in his prediction of 2.6%. He doesn't quote error bars, but he expresses that he is very uncertain, and wide error bars are also implicit in his probability-tree method. YS, on the other hand, seem to have very narrow error bars around their claim that "if anyone builds ASI with modern methods, everyone will die."