Yes, it does seem worth pointing out that these are Bayesian rather than "frequency"/"physical" probabilities. (Though Ord uses them as somewhat connected to frequency probabilities, as he also discusses how long we should expect humanity to last given various probabilities of x-catastrophe per century.)
To be clear, though, that's what I had in mind when suggesting that being uncertain only within a particular order of magnitude was surprising to me. E.g., I agree with the following statement:
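(For concreteness, here's a minimal sketch of that connection, with illustrative numbers rather than Ord's: if the per-century risk r were constant, survival time would be geometrically distributed, with an expected value of about 1/r centuries.)

```python
# If existential catastrophe has a constant chance r per century, the
# number of centuries until catastrophe is geometric with mean 1/r.
for r in (1 / 1000, 1 / 100, 1 / 6):  # illustrative risks, not Ord's
    print(f"risk {r:.4f} per century -> ~{1 / r:.0f} centuries expected")
```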
But these are Ord's beliefs, so when he says they could be a factor of 3 higher or lower, I think he means that he thinks there's a good chance that he could be convinced that they're that much higher or lower, with new information.
...but I was surprised to hear that, if Ord does mean it the way it sounds to me, he thinks he could only be convinced to raise or lower his credence by a factor of ~3.
Though it's possible he instead meant that they could well be off by a factor of 3 (that wouldn't surprise him at all), but that it's also plausible they could be off by even more.
I don't think it's meaningful to say that a belief "X will happen with probability p" is accurate or not. We could test a set of beliefs and probabilities for calibration, but there are too few events here (many of which are extremely unlikely according to his views and are too far in the future) to test his calibration on them. So it's basically meaningless to say whether or not he's accurate about these.
I think there's something to this, but I'm not sure I totally agree. Or at least it might depend on what you mean by "accurate". I'm not an expert here, but Wikipedia says:
Broadly speaking, there are two interpretations of Bayesian probability. For objectivists, interpreting probability as an extension of logic, probability quantifies the reasonable expectation that everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem.[2][8] For subjectivists, probability corresponds to a personal belief.
I think a project like Ord's is probably most useful if it's at least striving for objectivist Bayesian probabilities. (I think "the epistemic interpretation" is also relevant.) And if it's doing so, I think the probabilities can be meaningfully critiqued as more or less reasonable or useful.
The claim that they represent the right orders of magnitude is equivalent to them being correct to within a factor of about 3.
I agree that this is at least roughly correct, given that he's presenting each credence/probability as "1 in [some power of 10]". I didn't mean to imply that I was questioning two substantively different claims of his; more just to point out that he reiterates a similar point, weakly suggesting he really does mean that this is roughly the range of uncertainty he considers these probabilities to have.
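(As an aside on the arithmetic: half an order of magnitude is a factor of 10^0.5 ≈ 3.16, which is why "right order of magnitude" and "within a factor of ~3" come to roughly the same thing. A minimal Python check, using an illustrative figure rather than one of Ord's:)

```python
import math

# Half an order of magnitude is a factor of 10**0.5 ~= 3.16. So an
# estimate that's off by up to a factor of ~3 still rounds to the same
# power of ten on a log scale, i.e. the same order of magnitude.
estimate = 1e-6  # illustrative "1 in a million" figure, not Ord's

for factor in (1 / 3, 1.0, 3.0):
    value = estimate * factor
    print(f"{value:.2e} -> nearest order: 10^{round(math.log10(value))}")
# All three lines report 10^-6; only beyond ~x3.16 does the order change.
```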
I'm not sure I totally agree, or at least it depends on what you mean by "accurate". I'm not an expert here, but Wikipedia says:
Broadly speaking, there are two interpretations of Bayesian probability. For objectivists, interpreting probability as an extension of logic, probability quantifies the reasonable expectation that everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem.[2][8] For subjectivists, probability corresponds to a personal belief.
I think a project like Ord's is probably most useful if it's at least striving for objectivist Bayesian probabilities. (I think "the epistemic interpretation" is also relevant.)
I'm also not an expert here, but I think we'd have to agree about how to interpret knowledge and build the model, and have the same priors, to guarantee this kind of agreement. See some discussion here. The link you sent about probability interpretations also links to the reference class problem.
And if it's doing so, I think the probabilities can be meaningfully critiqued as more or less reasonable or useful.
I think we can critique probabilities based on how they were estimated, at least, and I think some probabilities we can be pretty confident in because they come from repeated random-ish trials or we otherwise have reliable precedent to base them on (e.g. good reference classes, and the estimates don't vary too much between the best reference classes). If there's only really one reasonable model, and all of the probabilities are pretty precise in it (based on precedent), then the final probability should be pretty precise, too.
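To illustrate the repeated-trials point, here's a minimal sketch of a standard Beta-Binomial update, with made-up numbers (nothing from the book): as the number of trials grows, the posterior over the underlying probability concentrates, so the estimate becomes precise almost regardless of the prior.

```python
# A textbook Beta-Binomial update (made-up numbers): with a Beta(a, b)
# prior over an unknown probability p, observing k successes in n
# independent trials gives a Beta(a + k, b + n - k) posterior.
def posterior_mean_and_sd(k, n, a=1.0, b=1.0):
    a_post, b_post = a + k, b + (n - k)
    mean = a_post / (a_post + b_post)
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return mean, var ** 0.5

# Same observed frequency (30% successes), increasing numbers of trials:
for k, n in [(3, 10), (30, 100), (3000, 10000)]:
    mean, sd = posterior_mean_and_sd(k, n)
    print(f"n={n:>5}: posterior mean={mean:.3f}, sd={sd:.4f}")
# The posterior sd shrinks roughly like 1/sqrt(n): with enough trials the
# estimate is precise, and the choice of reasonable prior barely matters.
```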
Just found a quote from the book which I should've mentioned earlier (perhaps this should've also been a footnote in this post):
any notion of risk must involve some kind of probability. What kind is involved in existential risk? Understanding the probability in terms of objective long-run frequencies won't work, as the existential catastrophes we are concerned with can only ever happen once, and will always be unprecedented until the moment it is too late. We can't say the probability of an existential catastrophe is precisely zero just because it hasn't happened yet.
Situations like these require an evidential sense of probability, which describes the appropriate degree of belief we should have on the basis of the available information. This is the familiar type of probability used in courtrooms, banks and betting shops. When I speak of the probability of an existential catastrophe, I mean the credence humanity should have that it will occur, in light of our best evidence.
And I'm pretty sure there was another quote somewhere about the complexities with this.
As for your comment, I'm not sure if we're just using language slightly differently or actually have different views. But I think we do have different views on this point:
If there's only really one reasonable model, and all of the probabilities are pretty precise in it (based on precedent), then the final probability should be pretty precise, too.
I would say that, even if one model is the most (or only) reasonable one we're aware of, if we're not certain about the model, we should account for model uncertainty (or uncertainty about the argument). So (I think) even if we don't have specific reasons for other precise probabilities, or for decreasing the precision, we should still make our probabilities less precise, because there could be "unknown unknowns", or mistakes in our reasoning process, or whatever.
If we know that our model might be wrong, and we don't account for that when thinking about how certain vs uncertain we are, then we're not using all the evidence and information we have. Thus, we wouldn't be striving for that "evidential" sense of probability as well as we could. And more importantly, it seems likely we'd predictably do worse in making plans and achieving our goals.
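As a toy illustration (with entirely made-up numbers, not Ord's): even a small credence that the model or argument is wrong can dominate a very small within-model probability, so the all-things-considered estimate can't be much more precise than our model uncertainty allows.

```python
# Toy model-uncertainty adjustment (entirely made-up numbers).
p_within_model = 1e-9  # probability of the event if our model is right
p_model_wrong = 0.01   # credence that the model/argument is flawed
p_if_wrong = 1e-4      # rough guess at the probability in that case

# Law of total probability over "model right" vs "model wrong":
p_overall = (1 - p_model_wrong) * p_within_model + p_model_wrong * p_if_wrong
print(f"{p_overall:.2e}")  # ~1.0e-06: the model-uncertainty term dominates
```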
Interestingly, Ord is among the main people I've seen making the sort of argument I make above, both in this book and in two prior papers (one of which I've only read the abstract of). This increased my degree of surprise at him appearing to suggest he was fairly confident these estimates were of the right order of magnitude.
I agree that we should consider model uncertainty, including the possibility of unknown unknowns.
I think it's rare that you can show that only one model is reasonable in practice, because the world is so complex. That mostly holds only for really well-defined problems with known parts and finitely many known unknowns, like certain games, (biased) coin flipping, etc.