This would benefit from stating a bottom line up front: e.g., "Using Shapira's Doom Train analytic framework, I estimate a 31% p(doom). However, after adjustments—especially for the views of superforecasters and AI insiders—my adjusted p(doom) is 2.76%."
More substantively, I suggest your outcome is largely driven by the Bayes factors—I think the possible range of outcomes is 0% to 9% given the stated factors. And my guess is that you might have chosen greater or lesser factors depending on where your own analysis ended up—so the range of plausible outcomes is even narrower as a practical matter.
That’s one reason I recommend the BLUF here—someone who doesn’t take the 24 minutes to read the whole thing needs to understand how much of a role the Bayes factors are playing in the titular p(doom) estimate vs. the Doom Train methodology.
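To make the size of that adjustment concrete: the combined Bayes factor implied by moving from 31% to 2.76% can be back-calculated in odds form. This is my own arithmetic from the two headline numbers, not a figure from the post—a minimal sketch:

```python
def to_odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

# Headline numbers from the post
prior = 0.31        # raw Doom Train estimate
posterior = 0.0276  # estimate after Bayes-factor adjustments

# Combined Bayes factor implied by the update (posterior odds / prior odds)
implied_factor = to_odds(posterior) / to_odds(prior)
print(round(implied_factor, 3))  # ≈ 0.063, i.e., roughly a 16x downward update
```

In other words, the adjustments collectively shrink the odds by a factor of about sixteen—which is why the Bayes factors, not the Doom Train itself, dominate the final number.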