Thanks for this. I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
On your first bullet:
You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" (i.e. not ruling out very long futures) rests largely on model uncertainty...
...This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
I'll go and read the paper you mention, but flagging that my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are "forbidden" (this could well be what the paper tries to do). We would then need to make sure that no such event can be expressed as a conjunction of a very large number of other such events.
Concretely, P(Humanity survives one billion years) is the product of one million probabilities of surviving each millennium, conditional on having survived up to that point. As a result, we either need to set some of the intervening probabilities, like P(Humanity survives the next millennium | Humanity has survived to the year 500,000,000 AD), extremely high, or we need to set the overall product extremely low. Setting everything to the range 0.01% to 99.99% is not an option, without giving up on arithmetic or probability theory. And of course, I could break the product into a billion-fold conjunction where each component was "survive the next year" if I wanted to make the requirements even more extreme.
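To make the arithmetic vivid, here is a minimal sketch; the specific per-millennium numbers (0.9999 and the 10% target) are arbitrary illustrative choices of mine, not claims about the actual risk:

```python
# Sketch of the conjunction point: over one million consecutive millennia,
# "moderate" per-step survival probabilities force a vanishingly small product,
# while any non-tiny overall probability forces extreme per-step credences.

n_steps = 1_000_000  # one billion years, counted in millennia

p_step = 0.9999      # a seemingly high per-millennium survival probability
p_total = p_step ** n_steps
print(f"P(survive all {n_steps:,} millennia) = {p_total:.2e}")    # ~3.7e-44

target = 0.1         # ask instead for a 10% chance of surviving the lot
p_needed = target ** (1 / n_steps)
print(f"Average per-millennium survival needed: {p_needed:.7f}")  # ~0.9999977
```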
Note I think it is plausible such extremes can be justified, since it seems like a version of humanity that has survived 500,000 millennia really should have excellent odds of surviving the next millennium. Indeed, I think that if you actually write out the model uncertainty argument mathematically, what ends up happening here is that the fact that humanity has survived 500,000 millennia is massive, overwhelming Bayesian evidence that the "correct" model is one of the ones that makes such a long life possible, allowing you to reach very extreme credences about the then-future. This is somewhat analogous to the intuitive extreme credence most people have that they won't die in the next second.
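Here is one toy way of writing that argument out; the two candidate models, their survival rates, and the prior are all invented purely for illustration:

```python
import math

# Toy model-uncertainty calculation: observing that humanity has already
# survived 500,000 millennia is overwhelming evidence for a model on which
# per-millennium survival is near-certain.

T = 500_000                                # millennia survived so far
s_robust, s_fragile = 1 - 1e-7, 0.99       # per-millennium survival in each model
prior_robust, prior_fragile = 0.01, 0.99   # prior heavily favoring fragility

# Work in log space, since 0.99 ** 500_000 underflows ordinary floats.
log_robust = math.log(prior_robust) + T * math.log(s_robust)
log_fragile = math.log(prior_fragile) + T * math.log(s_fragile)
post_robust = 1 / (1 + math.exp(log_fragile - log_robust))

p_next = post_robust * s_robust + (1 - post_robust) * s_fragile
print(f"Posterior on the robust model: {post_robust}")  # ~1.0
print(f"P(survive the next millennium): {p_next}")      # ~1 - 1e-7
```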
my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are "forbidden" (this could well be what the paper tries to do).
I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren't "forbidden" in general.
(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)
I still think that the distinction between credences/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:
I think it's often harder to justify an extreme credence that a particular model is right than it is to justify an extreme probability within a model.
Often when it seems we have extreme credence in a model, this just holds "at a certain level of detail", and if we looked at a richer space of models that makes more fine-grained distinctions, we'd say that our credence is distributed over a (potentially very large) family of models.
There is a difference between an extreme all-things-considered credence (i.e., in this simplified way of thinking about epistemics, the "expected credence" across models) and being highly confident in an extreme credence;
I think the latter is less often justified than the former. And again, if it seems that the latter is justified, I think it'll often be because an extreme amount of credence is distributed among different models, but all of these models agree about some event we're considering. (E.g. ~all models agree that I won't spontaneously die in the next second, or that Santa Claus isn't going to appear in my bedroom.)
When different models agree that some event is the conjunction of many others, then each model will have an extreme credence for some event, but the models might disagree about which events the credence is extreme for.
Taken together (i.e. across events/decisions), your all-things-considered credences might therefore look "funny" or "inconsistent" (by the lights of any single model). E.g. you might have non-extreme all-things-considered credence in two events based on two different models that are inconsistent with each other, and each of which rules out one of the events with extreme probability but not the other. (There is a small numerical sketch of this at the end of this comment.)
I acknowledge that I'm making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what's going on I would need to spell out what exactly I mean by "often" etc. (Because as I said I do agree that these claims don't always hold!)
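Here is the promised numerical sketch of that last point, with entirely made-up numbers: two models, each of which treats one event as ~impossible, but which disagree about which one.

```python
# Toy illustration: all-things-considered credences can be non-extreme for both
# events even though each model rules one of them out, because the models
# disagree about *which* event is ruled out.

credence_in_model = {"M1": 0.5, "M2": 0.5}  # credence that each model is right

p_within = {
    "M1": {"E1": 1e-9, "E2": 0.9},  # M1 treats E1 as ~impossible
    "M2": {"E1": 0.9, "E2": 1e-9},  # M2 treats E2 as ~impossible
}

for event in ("E1", "E2"):
    # All-things-considered credence = credence-weighted average across models.
    p = sum(credence_in_model[m] * p_within[m][event] for m in p_within)
    print(f"All-things-considered P({event}) = {p:.3f}")  # 0.450 for both
```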
Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a finite chance of extinction in each generation, but these diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
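To put rough numbers on that (a sketch under exactly these assumptions, i.e. independent Poisson(1.1) offspring and nothing else): the extinction probability per founding lineage is the smallest fixed point of the offspring distribution's probability generating function, roughly 0.82, so a modestly large founding population is almost certain to survive indefinitely.

```python
import math

# Galton-Watson branching process with Poisson(1.1) offspring per individual.
# The extinction probability q (starting from one founder) is the smallest
# solution in [0, 1] of q = G(q), where G(q) = exp(lam * (q - 1)) is the
# probability generating function of the Poisson offspring distribution.
lam = 1.1

q = 0.0
for _ in range(5_000):  # fixed-point iteration converges to q from below
    q = math.exp(lam * (q - 1))

print(f"Extinction probability, 1 founder: {q:.4f}")      # ~0.824
print(f"Survival probability, 1 founder:   {1 - q:.4f}")  # ~0.176

# Lineages die out independently, so a population of N founders goes extinct
# with probability q ** N: survival is near-certain once the rocky period is past.
for N in (10, 100, 1_000):
    print(f"P(eventual extinction | {N} founders) = {q ** N:.2e}")
```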
That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high the unavoidable background rate of such crises is (i.e. ones that remain even if you have a very sophisticated and well-resourced attempt to prevent them).
On current understanding, I think the lower bounds for the rate of such exogenous events rely on things like false vacuum decay (and maybe gamma-ray bursts while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though also plausible it's higher because there are risks that aren't observed/understood).
Bounding endogenous risk seems a bit harder to reason about. I think you can give kind of fairytale/handwaving existence proofs of stable political systems (which might, however, be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.
I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
To be clear, this makes a lot of sense to me, and I emphatically agree that understanding the arguments is valuable independently of whether this immediately changes a practical conclusion.