Hmm. I agree that these numbers are low confidence. But for the purpose of acting and forming conclusions from this, I’m not sure what you think is a better approach (beyond saying that more resources should be put into becoming more confident, which I broadly agree with).
Do you think I can never make statements like “low confidence proposition X is more likely than high confidence proposition Y”? What would feel like a reasonable criterion for being able to say that kind of thing?
More generally, I’m not actually sure what you’re trying to capture with error bounds—what does it actually mean to say that P(AI X-risk) is in [0.5%, 50%] rather than 5%? What is this a probability distribution over? I’m estimating a probability, not a quantity. I’d be open to the argument that the uncertainty comes from ‘what might I think if I thought about this for much longer’.
I’ll also note that the timeline numbers are a distribution over years, which already implicitly includes a good deal of uncertainty, plus some probability that AI never arrives. Though obviously it could include more. The figure for AI x-risk is a point estimate, which is much dodgier.
And I’ll note again that the natural causes numbers are at best medium confidence, since they assume the status quo continues!
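To make the “what is this a distribution over?” question concrete, here is a toy sketch (the model choice and the 5th/95th-percentile reading of the interval are my illustrative assumptions, not a claim about anyone’s actual credences): treat the stated interval [0.5%, 50%] as the 90% interval of a normal distribution over log-odds, representing “what I might conclude if I thought about this for much longer”.

```python
# Toy sketch (illustrative assumptions): cash out "P is in [0.5%, 50%]"
# as a normal distribution over log-odds whose 5th/95th percentiles
# land at 0.5% and 50%, then compare the median to the mean.
import math
import random

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

low, high = 0.005, 0.50                           # the stated interval
mu = (logit(low) + logit(high)) / 2               # midpoint in log-odds
sigma = (logit(high) - logit(low)) / (2 * 1.645)  # 90% interval of a normal

random.seed(0)
samples = [sigmoid(random.gauss(mu, sigma)) for _ in range(100_000)]
mean_p = sum(samples) / len(samples)
print(f"median ~ {sigmoid(mu):.3f}, mean of the distribution ~ {mean_p:.3f}")
```

One thing this makes visible: because the distribution is right-skewed, the mean (which is what matters for expected-value calculations) sits noticeably above the median, so the interval is not decision-equivalent to quoting its midpoint as a point estimate.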
would give you a value between 0.6% and 87%
Nitpick: I think you mean ~6.5%? (0.37/(0.37+5.3) ≈ 0.065.) Obviously this doesn’t change your core point.
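For concreteness, the arithmetic behind the nitpick:

```python
# Checking the quoted ratio: 0.37 / (0.37 + 5.3)
ratio = 0.37 / (0.37 + 5.3)
print(round(100 * ratio, 1))  # prints 6.5 (percent), not 0.6
```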
Do you think I can never make statements like “low confidence proposition X is more likely than high confidence proposition Y”? What would feel like a reasonable criterion for being able to say that kind of thing?
Honestly, yeah, I think it is weird to definitively state that wildly speculative thing X is more likely than well-known, well-studied thing Y (or, to put it differently, when the error bounds on X are orders of magnitude wider than the error bounds on Y). It might help if you provided a counterexample here? I think my objection might be partially semantic: saying “X is more likely than Y” seems like smuggling certainty into a very uncertain proposition.
what does it actually mean to say that P(AI X-risk) is in [0.5%, 50%] rather than 5%
I think it more accurately conveys the state of knowledge about the situation, which is that you don’t know much at all.
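One way to see how both framings can be partly right (a toy Monte Carlo with entirely invented numbers, not either side’s actual estimates): give the speculative risk X a wide distribution over log-odds and the well-studied risk Y a narrow one, then ask in what fraction of draws X exceeds Y.

```python
# Toy sketch (invented numbers): X is "wildly speculative" (wide spread
# in log-odds), Y is "well studied" (narrow spread). Even if X's median
# is higher, the fraction of draws with X > Y shows how much the wide
# error bounds soften the claim "X is more likely than Y".
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

random.seed(1)
N = 100_000
# X: median 5%, wide error bounds (sigma ~ 1.4 in log-odds)
xs = [sigmoid(random.gauss(math.log(0.05 / 0.95), 1.4)) for _ in range(N)]
# Y: median 4%, tight error bounds (sigma ~ 0.1 in log-odds)
ys = [sigmoid(random.gauss(math.log(0.04 / 0.96), 0.1)) for _ in range(N)]

frac = sum(x > y for x, y in zip(xs, ys)) / N
print(f"fraction of draws where X > Y: {frac:.2f}")
```

With these made-up numbers the comparison comes out only modestly in X’s favour (a bit over half of draws), which is one way of saying that “X is more likely than Y” is defensible as a best guess while still carrying much less certainty than the bare sentence suggests.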
(also, lol, fair point on the calculation error)