Hmm. I agree that these numbers are low confidence. But for the purpose of acting and forming conclusions from this, I’m not sure what you think is a better approach (beyond saying that more resources should be put into becoming more confident, which I broadly agree with).
Do you think I can never make statements like “low confidence proposition X is more likely than high confidence proposition Y”? What would feel like a reasonable criterion for being able to say that kind of thing?
More generally, I’m not actually sure what you’re trying to capture with error bounds—what does it actually mean to say that P(AI X-risk) is in [0.5%, 50%] rather than 5%? What is this a probability distribution over? I’m estimating a probability, not a quantity. I’d be open to the argument that the uncertainty comes from ‘what might I think if I thought about this for much longer’.
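To make that concrete: one way I could cash this out is as a distribution over the credence I might settle on after much more thought. A rough sketch (the log-uniform shape over [0.5%, 50%] is purely an assumption for illustration, not something either of us has claimed):

```python
import numpy as np

# Treat the [0.5%, 50%] bracket as a distribution over the credence I might
# settle on after much more thought; log-uniform is an arbitrary choice here.
rng = np.random.default_rng(0)
credences = np.exp(rng.uniform(np.log(0.005), np.log(0.5), size=100_000))

# Even then, the number I'd act on is a single probability: the mean of this
# distribution (the median of a log-uniform over [0.5%, 50%] is exactly 5%).
print(f"mean credence:   {credences.mean():.3f}")      # ~0.107
print(f"median credence: {np.median(credences):.3f}")  # ~0.050
```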
I’ll also note that the timeline numbers are a distribution over years, which already implicitly includes a bunch of uncertainty, plus some probability on AI never arriving. Though obviously it could include more. The figure for AI x-risk is a point estimate, which is much dodgier.
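As a toy illustration of that asymmetry (all the numbers here are made up purely for the sketch), a timeline distribution can answer many different questions at once, including “never”, whereas a point estimate answers exactly one:

```python
# Made-up timeline distribution: probability that AGI first arrives by each
# year, with an explicit "never" bucket. A single x-risk number has no such
# structure to interrogate.
timeline = {2030: 0.10, 2040: 0.25, 2050: 0.20, 2075: 0.15, 2100: 0.10, "never": 0.20}
assert abs(sum(timeline.values()) - 1.0) < 1e-9

# One of many questions the distribution can answer:
p_agi_by_2050 = sum(p for year, p in timeline.items() if year != "never" and year <= 2050)
print(p_agi_by_2050)  # 0.55

p_xrisk = 0.05  # a point estimate answers exactly one question, with no spread
```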
And I’ll note again that the natural causes numbers are at best medium confidence, since they assume the status quo continues!
would give you a value between 0.6% and 87%
Nitpick: I think you mean ~6.5%? (0.37/(0.37 + 5.3) ≈ 0.065). Obviously this doesn’t change your core point.
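Spelling it out (I’m assuming 0.37 and 5.3 are the two figures being combined in the parent comment):

```python
# Ratio of the two figures from the parent comment; the lower bound comes out
# around 6.5%, not 0.6%.
lower = 0.37 / (0.37 + 5.3)
print(f"{lower:.3f}")  # 0.065
```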
Huh, I appreciate you actually putting numbers on this! I was surprised that the nuclear risk numbers are even remotely competitive with natural causes (let alone significantly dominating over the next 20 years), and I take this as at least a mild downward update on AI dominating all other risks (on a purely personal level). Probably I had incorrect cached thoughts from people exclusively discussing extinction risk rather than just catastrophic risks, but from a purely personal perspective this distinction matters much less.
EDIT: Added a caveat to the post accordingly