Just found a quote from the book which I should've mentioned earlier (perhaps this should've also been a footnote in this post):
any notion of risk must involve some kind of probability. What kind is involved in existential risk? Understanding the probability in terms of objective long-run frequencies won't work, as the existential catastrophes we are concerned with can only ever happen once, and will always be unprecedented until the moment it is too late. We can't say the probability of an existential catastrophe is precisely zero just because it hasn't happened yet.
Situations like these require an evidential sense of probability, which describes the appropriate degree of belief we should have on the basis of the available information. This is the familiar type of probability used in courtrooms, banks and betting shops. When I speak of the probability of an existential catastrophe, I mean the credence humanity should have that it will occur, in light of our best evidence.
And I'm pretty sure there was another quote somewhere about the complexities with this.
As for your comment, I'm not sure if we're just using language slightly differently or actually have different views. But I think we do have different views on this point:
If there's only really one reasonable model, and all of the probabilities are pretty precise in it (based on precedent), then the final probability should be pretty precise, too.
I would say that, even if one model is the most (or only) reasonable one we're aware of, if we're not certain about the model, we should account for model uncertainty (or uncertainty about the argument). So (I think) even if we don't have specific reasons for other precise probabilities, or for decreasing the precision, we should still make our probabilities less precise, because there could be "unknown unknowns", or mistakes in our reasoning process, or whatever.
If we know that our model might be wrong, and we don't account for that when thinking about how certain vs. uncertain we are, then we're not using all the evidence and information we have. Thus, we wouldn't be striving for that "evidential" sense of probability as well as we could. And more importantly, it seems likely we'd predictably do worse in making plans and achieving our goals.
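To make that a bit more concrete, here's a toy sketch in Python (all the numbers are made up purely for illustration, and aren't from the book or from Ord's estimates) of how folding in model uncertainty via the law of total probability turns a precise within-model estimate into a much less precise overall one:

```python
# Toy sketch with made-up numbers: how accounting for model uncertainty
# turns a precise within-model estimate into a less precise,
# all-things-considered one.

p_model_right = 0.9           # assumed credence that our model's structure is sound
p_event_given_model = 0.001   # the model's own precise estimate (assumed)

# If the model is wrong, we can only bound the probability very loosely.
p_if_wrong_low, p_if_wrong_high = 0.0, 0.05   # assumed bounds

# Law of total probability, evaluated at both ends of the "model wrong" range:
low = p_model_right * p_event_given_model + (1 - p_model_right) * p_if_wrong_low
high = p_model_right * p_event_given_model + (1 - p_model_right) * p_if_wrong_high

print(f"All-things-considered probability: {low:.4f} to {high:.4f}")
# -> roughly 0.0009 to 0.0059: the precise 0.001 becomes a range spanning
#    most of an order of magnitude once model uncertainty is included.
```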
Interestingly, Ord is among the main people I've seen making the sort of argument I make in the prior paragraph, both in this book and in two prior papers (one of which I've only read the abstract of). That made me all the more surprised that he appeared to suggest he was fairly confident these estimates were of the right order of magnitude.
I agree that we should consider model uncertainty, including the possibility of unknown unknowns.
I think it's rare that you can show only one model is reasonable in practice, because the world is so complex. That mostly happens only for really well-defined problems with known parts and finitely many known unknowns, like certain games or (biased) coin flipping.
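Here's a toy sketch (with made-up data) of the kind of well-defined case I have in mind, where the model's structure is essentially known and a sharp probability is defensible:

```python
# Toy sketch with made-up data: a well-defined problem (a possibly biased
# coin) where the model's structure is essentially known, so the resulting
# probability can be sharp in a way existential-risk estimates can't be.

heads, tails = 60, 40        # observed flips (made up)
alpha0, beta0 = 1, 1         # uniform Beta(1, 1) prior over the coin's bias

# Conjugate Beta-Binomial update: posterior is Beta(alpha0 + heads, beta0 + tails).
alpha_post, beta_post = alpha0 + heads, beta0 + tails

# Posterior mean = probability the next flip lands heads.
p_next_heads = alpha_post / (alpha_post + beta_post)
print(f"P(next flip is heads) = {p_next_heads:.3f}")   # -> 0.598

# Here there's little room for "unknown unknowns" about the model itself,
# which is what makes such a precise number defensible.
```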