I’m not sure I totally agree, or at least it depends on what you mean by “accurate”. I’m not an expert here, but Wikipedia says:
Broadly speaking, there are two interpretations on Bayesian probability. For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a “robot”) sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox’s theorem.[2][8] For subjectivists, probability corresponds to a personal belief.
I think a project like Ord’s is probably most useful if it’s at least striving for objectivist Bayesian probabilities. (I think “the epistemic interpretation” is also relevant.)
I’m also not an expert here, but I think we’d have to agree about how to interpret knowledge and build the model, and have the same priors to guarantee this kind of agreement. See some discussion here. The link you sent about probability interpretations also links to the reference class problem.
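As a toy illustration of the “same priors” point (a minimal sketch in Python, with numbers I’ve just made up): two people can apply the same Bayesian update to the same evidence and still end up with different credences if they started from different priors.

```python
# Same evidence, same update rule, different priors -> different credences.
# Suppose two people both observe 3 "successes" in 10 trials (invented numbers).
successes, trials = 3, 10

def beta_posterior_mean(prior_a, prior_b):
    """Mean of the Beta posterior after observing the trials above."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

print(beta_posterior_mean(1, 1))    # uniform prior     -> ~0.33
print(beta_posterior_mean(10, 1))   # optimistic prior  -> ~0.62
```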
And if it’s doing so, I think the probabilities can be meaningfully critiqued as more or less reasonable or useful.
I think we can at least critique probabilities based on how they were estimated, and I think we can be pretty confident in some probabilities because they come from repeated random-ish trials, or because we otherwise have reliable precedent to base them on (e.g. good reference classes, where the estimates don’t vary too much between the best reference classes). If there’s only really one reasonable model, and all of the probabilities are pretty precise in it (based on precedent), then the final probability should be pretty precise, too.
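As a rough sketch of the “repeated random-ish trials” case (made-up numbers, not a claim about any actual risk): when there’s lots of relevant precedent, the posterior over the probability ends up tightly concentrated, so the final estimate is correspondingly precise.

```python
import math

# Lots of relevant precedent: say 12 "events" in 1000 random-ish trials,
# starting from a uniform Beta(1, 1) prior. All numbers are invented.
events, trials = 12, 1000
a, b = 1 + events, 1 + (trials - events)  # Beta posterior parameters

mean = a / (a + b)
std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # posterior standard deviation

print(f"posterior mean: {mean:.4f}")                                        # ~0.013
print(f"roughly 95% of mass within: {mean - 2*std:.4f} to {mean + 2*std:.4f}")  # ~0.006 to 0.020
```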
Just found a quote from the book which I should’ve mentioned earlier (perhaps this should’ve also been a footnote in this post):
any notion of risk must involve some kind of probability. What kind is involved in existential risk? Understanding the probability in terms of objective long-run frequencies won’t work, as the existential catastrophes we are concerned with can only ever happen once, and will always be unprecedented until the moment it is too late. We can’t say the probability of an existential catastrophe is precisely zero just because it hasn’t happened yet.
Situations like these require an evidential sense of probability, which describes the appropriate degree of belief we should have on the basis of the available information. This is the familiar type of probability used in courtrooms, banks and betting shops. When I speak of the probability of an existential catastrophe, I mean the credence humanity should have that it will occur, in light of our best evidence.
And I’m pretty sure there was another quote somewhere about the complexities with this.
As for your comment, I’m not sure if we’re just using language slightly differently or actually have different views. But I think we do have different views on this point:
If there’s only really one reasonable model, and all of the probabilities are pretty precise in it (based on precedent), then the final probability should be pretty precise, too.
I would say that, even if one model is the most (or only) reasonable one we’re aware of, if we’re not certain about the model, we should account for model uncertainty (or uncertainty about the argument). So (I think) even if we don’t have specific reasons for other precise probabilities, or for decreasing the precision, we should still make our probabilities less precise, because there could be “unknown unknowns”, or mistakes in our reasoning process, or whatever.
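To make that concrete with a toy calculation (the numbers are entirely invented, not anything from the book or this thread): even a modest credence that our one “reasonable” model is wrong can substantially change the overall probability.

```python
# Toy model-uncertainty calculation with invented numbers.
# Our single "reasonable" model says the probability of the event is 0.001,
# but we only give the model itself 90% credence; if it's wrong, suppose
# we'd fall back on a much vaguer 0.05.
p_within_model = 0.001
p_model_correct = 0.90
p_if_model_wrong = 0.05

overall = p_model_correct * p_within_model + (1 - p_model_correct) * p_if_model_wrong
print(f"{overall:.4f}")  # 0.0059, several times the within-model estimate
```

And the overall number is now dominated by two inputs (the credence in the model and the fallback estimate) that are hard to pin down, which is part of what I mean by the final probability being less precise.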
If we know that our model might be wrong, and we don’t account for that when thinking about how certain vs uncertain we are, then we’re not using all the evidence and information we have. Thus, we wouldn’t be striving for that “evidential” sense of probability as well as we could. And more importantly, it seems likely we’d predictably do worse in making plans and achieving our goals.
Interestingly, Ord is among the main people I’ve seen making the sort of argument I make in the prior paragraph, both in this book and in two prior papers (one of which I’ve only read the abstract of). That made me more surprised that he appears to suggest he’s fairly confident these estimates are of the right order of magnitude.
I agree that we should consider model uncertainty, including the possibility of unknown unknowns.
I think it’s rare that you can show that only one model is reasonable in practice, because the world is so complex. That’s mostly possible only for really well-defined problems with known parts and finitely many known unknowns, like certain games, (biased) coin flipping, etc.