TL;DR: I'm curious what the most detailed or strongly-evidenced arguments are in favour of extinction risk eventually falling to extremely low levels.
An argument I often see goes to the effect of "we have a lot of uncertainty about the future, and given that it seems hard to be >99% confident that humanity will last <1 billion years". As written, this seems like a case of getting anchored by percentages and failing to process just how long one billion years really is (weak supporting evidence for the latter is that I sometimes see eerily similar estimates for one million years...). Perhaps this is my finance background talking, but I can easily imagine a world where the dominant way to express probability was basis points and our go-to probability for "very unlikely thing" was 1bp rather than 1%, which is 100x smaller. Or we could have a generic probability analogue of micromorts, which are 100x smaller still, etc. Yet such choices in language shouldn't be affecting our decisions or beliefs about the best thing to do.
On the object level, one type of event I'm allowed to be extremely confident about is a large conjunction of events; if I flip a fair coin 30 times, the chance of getting 30 heads is approximately one in a billion.
Humanity surviving for a long time has a similar property; if you think that civilisation has a 50% chance of making it through the next 10,000 years, then conditional on that a 50% chance of making it through the next 20,000 years, then 50% for the next 40,000 years, etc. (applying a common rule-of-thumb for estimating uncertain lifetimes, starting with the observation that civilisation has been around for ~10,000 years so far), then the odds of surviving a billion years come out somewhere between 1 in 2^16 and 1 in 2^17, AKA roughly 0.001%.
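To make the doubling arithmetic explicit, here is a minimal sketch in Python using the same numbers as above (10,000 years so far, 50% per doubling, a 1-billion-year horizon):

```python
import math

# Lindy-style rule of thumb from the paragraph above: civilisation is ~10,000
# years old, and each successive doubling of its total lifespan is assigned an
# independent 50% survival probability.
current_age = 1e4   # years of civilisation so far
horizon = 1e9       # the billion-year mark

doublings = math.log2(horizon / current_age)   # ~16.6 doublings needed
p_survive = 0.5 ** doublings
print(f"doublings needed: {doublings:.2f}")
print(f"P(survive to 1e9 years) ~ {p_survive:.1e}")  # ~1e-5, i.e. ~0.001%
```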
We could also try to estimate current extinction risk directly based on known risks. Most attempts I've seen at this suggest that 50% to make it through the next 10,000 years, AKA roughly 0.007% per year, is very generous. As I see it, this is because an object-level analysis of the risks suggests they are rising, not falling as the Lindy rule would imply.
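A quick sketch of the conversions behind those per-year figures (this assumes a constant annual risk, which is a simplification; the second calculation anticipates the <0.0000001% figure used below):

```python
# "50% chance of surviving the next 10,000 years" as a constant per-year risk.
p_10k_years = 0.5
annual_risk = 1 - p_10k_years ** (1 / 10_000)
print(f"implied constant annual risk: {annual_risk:.3%}")   # ~0.007%

# With a constant annual extinction risk p, the expected lifetime is 1/p years,
# so an expectation of >1e9 years needs p < 1e-9, i.e. <0.0000001% per year.
print(f"annual risk ceiling for a 1e9-year expected lifetime: {1e-9:.0e}")
```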
When I've expressed this point to people in the past, I tend to get very handwavy (non-numeric) arguments about how a super-aligned AI could dramatically cut existential risk to the levels required; another way of framing the above is that, to envisage a plausible future where we then have >one billion years in expectation, annualised risk needs to get to <0.0000001% in that future. Another thought is that space colonization could make humanity virtually invincible. So partly I'm wondering if there's a better-developed version of these that accounts for the risks that would remain, or other routes to the same conclusion, since this assumption of a large-future-in-expectation seems critical to a lot of longtermist thought.
Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance; I think the most important argument for me is the analogy to computers.
It's possible to write "Humanity survives the next billion years" as a conjunction of a billion events (humanity survives year 1, and year 2, and...). It's also possible to write "humanity goes extinct next year" as a conjunction of a billion events (Alice dies, and Bob dies, and...). Both of those are quite weak prima facie justifications for assigning high confidence. You could say that the second conjunction is different, because the billionth person is very likely to die once the others have died (since there has apparently been some kind of catastrophe), but the same is true for survival. In both cases there are conceivable events that would cause every term of the conjunction to be true, and we need to address the probability of those common causes directly. Being able to write the claim as a conjunction doesn't seem to help you get to extreme probabilities without an argument about independence.
I feel you should be very hesitant to assign 99%+ probabilities without a good argument, and I don't think this is about anchoring to percentages. The burden of proof gets stronger and stronger as you move closer to 1, and 100 is getting to be a big number. I think this is less likely to be a tractable disagreement than the other bullets but it seems worth mentioning for completeness. I'm curious if you think there are other natural statements where the kind of heuristic you are describing (or any other similarly abstract heuristic) would justifiably get you to such high confidences. I agree with Max Daniel's point that it doesn't work for realistic versions of claims like "This coin will come up heads 30 times in a row." You say that it's not exclusive to simplified models, but I think I'd be similarly skeptical of any application of this principle. (More generally, I think it's not surprising to assign very small probabilities to complex statements based on weak evidence, but that it will happen much more rarely for simple statements. It doesn't seem promising to get into that though.)
I think space colonization is probably possible, though getting up to probabilities like 50% for space colonization feasibility would be a much longer discussion. (I personally think >50% probability is much more reasonable than <10%.) If there is a significant probability that we colonize space, and that spreading out makes the survival of different colonists independent (as it appears it would), then it seems like we end up with some significant probability of survival. That said, I would also assign ~1/2 probability to surviving a billion years even if we were confined to Earth. I could imagine being argued down to 1/4 or even 1/8 but each successive factor of 2 seems much harder. So in some sense the disagreement isn't really about colonization.
Stepping back, I think the key object-level questions are something like "Is there any way to build a civilization that is very stable?" and "Will people try?" It seems to me you should have a fairly high probability on "yes" to both questions. I don't think you have to invoke super-aligned AI to justify that conclusion: it's easy to imagine organizing society in a way which drives existing extinction risks to negligible levels, and once that's done it's not clear where you'd get to 90%+ probabilities for new risks emerging that are much harder to reduce. (I'm not sure which step of this you get off the boat for. Is it that you can't imagine a world that, say, reduced the risk of an engineered pandemic killing everyone to <1/billion per year? Or that you think it's very likely other much harder-to-reduce risks would emerge?)
A lot of this is about burden of proof arguments. Is the burden of proof on someone to exhibit a risk that's very hard to reduce, or someone to argue that there exists no risk that is hard to reduce? Once we're talking about 10% or 1% probabilities it seems clear to me that the burden of proof is on the confident person. You could try to say "The claim of 'no bad risks' is a conjunction over all possible risks, so it's pretty unlikely" but I could just as well say "The claim that 'the risk is irreducible' is a conjunction over all possible reduction strategies, so it's pretty unlikely", so I don't think this gets us out of the stalemate (and the stalemate is plenty to justify uncertainty).
I do furthermore think that we can discuss concrete (kind of crazy) civilizations that are likely to have negligible levels of risk, given that e.g. (i) we have existence proofs for highly reliable machines over billion-year timescales, namely life, (ii) we have existence proofs for computers if you can build reliable machinery of any kind, (iii) it's easy to construct programs that appear to be morally relevant but which would manifestly keep running indefinitely. We can't get too far with this kind of concrete argument, since any particular future we can imagine is bound to be pretty unlikely. But it's relevant to me that e.g. stable-civilization scenarios seem about as gut-level plausible to me as non-AI extinction scenarios do in the 21st century.
Consider the analogous question "Is it possible to build computers that successfully carry out trillions of operations without errors that corrupt the final result?" My understanding is that in the early 20th century this question was seriously debated (though that's not important to my point), and it feels very similar to your question. It's very easy for a computational error to cascade and change the final result of a computation. It's possible to take various precautions to reduce the probability of an uncorrected error, but why think that it's possible to reduce that risk to levels lower than 1 in a trillion, given that all observed computers have had fairly high error rates? Moreover, it seems that error rates are growing as we build bigger and bigger computers, since each element has an independent failure rate, including the machinery designed to correct errors. To really settle this we need to get into engineering details, but until you've gotten into those details I think it's clearly unwise to assign very low probability to building a computer that carries out trillions of steps successfully: the space of possible designs is large and people are going to try to find one that works, so you'd need to have some good argument about why to be confident that they are going to fail.
You could say that computers are an exceptional example I've chosen with hindsight. But I'm left wondering if there are any valid applications of this kind of heuristic: what's the reference class of which "highly reliable computers" are exceptional rather than typical?
If someone said "A billion years is a long time. Any given thing that can plausibly happen should probably be expected to happen over that time period", then I'd ask about why life survived the last billion years.
You could say that "a billion years" is a really long time for human civilization (given that important changes tend to happen within decades or centuries) but not a long time for intelligent life (given that important changes take millions of years). This is similar to what happens if you appeal to current levels of extinction risk being really high. I don't buy this because life on earth is currently at a period of unprecedentedly rapid change. You should have some reasonable probability of returning to more historically typical timescales of hundreds of millions of years, which in turn gives you a reasonable overall probability of surviving for hundreds of millions of years. (Actually I think we should have >50% probabilities for reversion to slower timescales, since we can tell that the current period of rapid growth will soon be over. Over our history rapid change and rapid growth have basically coincided, so it's particularly plausible that returning to slow growth will also mean returning to slow change.)
Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work. It might be reasonable to do the extrapolation using some mixture between these reference classes (and others), but in order to get extreme probabilities for extinction you'd need to have an extreme mixture. This is part of the general pattern for why you don't usually end up with 99% probabilities for interesting questions without real arguments: you need not only a way of estimating that has very high confidence, you also need to be very confident in that way of estimating.
You could appeal to some similar outside view to say "humanity will undergo changes similar in magnitude to those that have occurred over the last billion years"; I think that's way more plausible (though I still wouldn't believe 99%) but I don't think that it matters for claims about the expected moral value of the future.
The doomsday argument can plausibly arrive at very high confidences based on anthropic considerations (if you accept those anthropic principles with very high confidence). I think many long-termists would endorse the conclusion that the vast majority of observers like us do not actually live in a large and colonizable universe: not at 99.999999% but at least at 99%. Personally I would reject the inference that we probably don't live in a large universe because I reject the implicit symmetry principle. At any rate, these lines of argument go in a rather different direction than the rest of your post and I don't feel like it's what you are getting at.
Thanks for the long comment; this gives me a much richer picture of how people might be thinking about this. On the first two bullets:
You say you aren't anchoring, but in a world where we defaulted to expressing probability in 1/10^6 units called Ms, I'm just left feeling like you would write "you should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number." So if it's not anchoring, what calculation or intuition is leading you to specifically 99% (or at least, something in that ballpark), and would similarly lead you to roughly 990,000M in the alternate language?
My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01% to 99.99% range. Conjunctions force you to have extreme confidence somewhere; the only question is where. If I try to steelman your claim, I think I end up with an idea that we should apply our extreme confidence to the thing inside the product, due to a correlated cause, rather than the thing outside; does that sound fair?
The rest I see as an attempt to justify the extreme confidences inside the product, and I'll have to think about it more. The following are gut responses:
I'm not sure which step of this you get off the boat for
I'm much more baseline cynical than you seem to be about people's willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, I'd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether that's correct or not, I don't think it's wildly unusual among people who take climate change seriously*, and yet we almost certainly aren't doing enough to combat that as a society. This gives me little hope for dealing with <10% threats that will surely appear over the centuries, and as a result I found and continue to find the seemingly-baseline optimism of longtermist EA very jarring.
(Again, the above is a gut response as opposed to a reasoned claim.)
Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work.
Yeah, Owen made a similar point, and actually I was using civilisation rather than "the human species", which is 20x shorter still. I honestly hadn't thought about intelligent life as a possible class before, and that probably is the thing from this conversation that has the most chance of changing how I think about this.
*"The survey from the Yale Program on Climate Change Communication found that 39 percent think the odds of global warming ending the human race are at least 50 percent."
I roughly think that there simply isn't very strong evidence for this. I.e. I think it would be mistaken to have a highly resilient large credence in extinction risk eventually falling to ~0.0000001%, humanity or its descendants surviving for a billion years, or anything like that.
[ETA: Upon rereading, I realized the above is ambiguous. With "large" I was here referring to something stronger than "non-extreme". E.g. I do think it's defensible to believe "I'm like 90% confident that over the next 10 years my credence in information-based civilization surviving for 1 billion years won't fall below 0.1%", and indeed that's a statement I would endorse. I think I'd start feeling skeptical if someone claimed there is no way they'd update to a credence below 40% or something like that.]
I think this is one of several reasons why the "naive case" for focusing on extinction risk reduction fails. (Another example of such a reason is the fact that, for most known hazards, collapse short of extinction seems way more likely than immediate extinction, that as a consequence most interventions affect both the probability of extinction and the probability and trajectory of various collapse scenarios, and that the latter effect might dominate but has unclear sign.)
I think the most convincing response is a combination of the following. Note, however, that the last two mostly argue that we should be longtermists despite the case for billion-year futures being shaky, rather than defending that case itself.
You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" (i.e. not ruling out very long futures) rests largely on model uncertainty, i.e. our inability to confidently identify the "correct" model for reasoning about the length of the future.
For example, suppose I produce a coin from my pocket and ask you to estimate how likely it is that in my first 30 flips I get only heads. Your all-things-considered credence will be dominated by your uncertainty over whether my coin is strongly biased toward heads. Since 30 heads are vanishingly unlikely if the coin is fair, this is the case even if your prior says that most coins someone produces from their pocket are fair: "vanishingly unlikely" here is much stronger (in this case around 10^-9) than your prior can justifiably be, i.e. "most coins" might defensibly refer to 90% or 99% or 99.99% but not 99.9999999%.
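A toy numerical version of this point; the prior on bias and the behaviour of a rigged coin are illustrative assumptions, not estimates:

```python
# Even a small prior that the coin is strongly biased toward heads dominates
# the all-things-considered probability of 30 heads, because the fair-coin
# probability (2**-30) is far below any defensible prior on bias.
p_biased = 1e-4                 # assumed prior that the coin is rigged
p_30heads_if_biased = 0.9       # assumed chance of 30 heads if rigged
p_30heads_if_fair = 0.5 ** 30   # ~9.3e-10

p_30heads = p_biased * p_30heads_if_biased + (1 - p_biased) * p_30heads_if_fair
print(f"{p_30heads:.2e}")       # ~9.0e-05: dominated by the model-uncertainty term
```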
This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
Note that I think it's still true that there is a possible epistemic state (and probably even a model we can write down now) that rules out very long futures with extreme confidence. The point is just that we won't be able to get to that epistemic state in practice.
Overall, I think the lower bound on the all-things-considered credence we should have in some speculative scenario often comes down to understanding how "fundamental" our model uncertainty is. I.e. roughly: to get to models that have practically significant credence in the scenario in question, how fundamentally would I need to revise my best-guess model of the world?
E.g. if I'm asking whether the LHC will blow up the world, or whether it's worth looking for the philosopher's stone, then I would need to revise extremely fundamental aspects of my world model, such as fundamental physics; we are justified in having pretty high credences in those.
By contrast, very long futures seem at least plausibly consistent with fundamental physics as well as plausible theories for how cultural evolution, technological progress, economics, etc. work.
It is here, and for this reason, that points like "but it's conceivable that superintelligent AI will reduce extinction risk to near-zero" are significant.
Therefore, model uncertainty will push me toward a higher credence in a very long future than in the LHC blowing up the world (but even for the latter my credence is plausibly dominated by model uncertainty rather than my credence in this happening conditional on my model of physics being correct).
Longtermism goes through (i.e. it looks like we can have most impact by focusing on the long term) on much less extreme time scales than 1 billion years.
Some such less extreme time scales have "more defensible" reasons behind them, e.g. outside-view considerations based on the survival of other species or the amount of time humanity or civilization has survived so far. The Lindy rule prior you describe is one example.
There is a wager for long futures: we can have much more impact if the future is long, so these scenarios might dominate our decision-making even if they are unlikely.
(NB I think this is a wager that is unproblematic only if we have independently established that the probability of the relevant futures isn't vanishingly small. This is because of the standard problems around Pascal's wager.)
That all being said, my views on this feel reasonably but not super resilient: like it's "only" 10% that I'll have changed my mind about this in major ways within 2 years. I also think there is room for more work on how best to think about such questions (the Ord et al. paper is a great example), e.g. checking that this kind of reasoning doesn't "prove too much" or lead to absurd conclusions when applied to other cases.
Thanks for this. I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
On your first bullet:
You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" (i.e. not ruling out very long futures) rests largely on model uncertainty...
...This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
I'll go and read the paper you mention, but flagging that my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are "forbidden" (this could well be what the paper tries to do). We would then need to make sure that no such event can be expressed as a conjunction of a very large number of other such events.
Concretely, P(Humanity survives one billion years) is the product of one million probabilities of surviving each millennium, conditional on having survived up to that point. As a result, we either need to set some of the intervening probabilities like P(Humanity survives the next millennium | Humanity has survived to the year 500,000,000 AD) extremely high, or we need to set the overall product extremely low. Setting everything to the range 0.01% to 99.99% is not an option, without giving up on arithmetic or probability theory. And of course, I could break the product into a billion-fold conjunction where each component was "survive the next year" if I wanted to make the requirements even more extreme.
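To put rough numbers on that constraint, a small sketch of the arithmetic:

```python
# With one million conditional millennium-survival probabilities, capping each
# at 99.99% already forces the overall product to be astronomically small.
n = 1_000_000
print(f"{0.9999 ** n:.1e}")        # ~3.7e-44

# Conversely, for the product to reach even 0.01%, the geometric mean of the
# million conditionals has to exceed roughly 99.999%:
print(f"{(1e-4) ** (1 / n):.7f}")  # ~0.9999908
```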
Note I think it is plausible such extremes can be justified, since it seems like a version of humanity that has survived 500,000 millennia really should have excellent odds of surviving the next millennium. Indeed, I think that if you actually write out the model uncertainty argument mathematically, what ends up happening is that the fact that humanity has survived 500,000 millennia is massive, overwhelming Bayesian evidence that the "correct" model is one of the ones that makes such a long life possible, allowing you to reach very extreme credences about the then-future. This is somewhat analogous to the intuitive extreme credence most people have that they won't die in the next second.
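Here is a toy two-model Bayesian sketch of that update; the model names and all the numbers are illustrative assumptions rather than anyone's actual estimates:

```python
# Surviving 500,000 millennia is overwhelming evidence for whichever model
# makes long survival possible, which then licenses extreme credence about
# the next millennium.
fragile = 0.999       # per-millennium survival under an assumed 'fragile' model
robust = 1 - 1e-9     # per-millennium survival under an assumed 'robust' model
prior_robust = 0.01   # even a small prior on the robust model suffices

n = 500_000
like_fragile = fragile ** n   # ~7e-218: the data essentially rule this model out
like_robust = robust ** n     # ~0.9995

posterior_robust = (prior_robust * like_robust) / (
    prior_robust * like_robust + (1 - prior_robust) * like_fragile
)
p_next_millennium = posterior_robust * robust + (1 - posterior_robust) * fragile
print(posterior_robust)       # prints 1.0 (true posterior is within ~1e-214 of 1)
print(p_next_millennium)      # ~0.999999999
```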
my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are "forbidden" (this could well be what the paper tries to do).
I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren't "forbidden" in general.
(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)
I still think that the distinction between credences/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:
I think it's often harder to justify an extreme credence that a particular model is right than it is to justify an extreme probability within a model.
Often when it seems we have extreme credence in a model this just holds "at a certain level of detail", and if we looked at a richer space of models that makes more fine-grained distinctions we'd say that our credence is distributed over a (potentially very large) family of models.
There is a difference between an extreme all-things-considered credence (i.e. in this simplified way of thinking about epistemics the "expected credence" across models) and being highly confident in an extreme credence;
I think the latter is less often justified than the former. And again, if it seems that the latter is justified, I think it'll often be because an extreme amount of credence is distributed among different models, but all of these models agree about some event we're considering. (E.g. ~all models agree that I won't spontaneously die in the next second, or that Santa Claus isn't going to appear in my bedroom.)
When different models agree that some event is the conjunction of many others, then each model will have an extreme credence for some event but the models might disagree about for which events the credence is extreme.
Taken together (i.e. across events/decisions) your all-things-considered credences might therefore look "funny" or "inconsistent" (by the lights of any single model). E.g. you might have non-extreme all-things-considered credence in two events based on two different models that are inconsistent with each other, and each of which rules out one of the events with extreme probability but not the other.
I acknowledge that I'm making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what's going on I would need to spell out what exactly I mean by "often" etc. (Because as I said I do agree that these claims don't always hold!)
Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a finite chance of extinction in each generation but these diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
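A quick sketch of that branching-process claim (the fixed-point calculation is the standard one for Galton-Watson processes; the founder population sizes are illustrative):

```python
import math

# Galton-Watson branching process with Poisson(1.1) offspring per individual.
# The extinction probability starting from one individual is the smallest root
# in [0, 1] of q = exp(lam * (q - 1)); fixed-point iteration from 0 finds it.
lam = 1.1
q = 0.0
for _ in range(10_000):
    q = math.exp(lam * (q - 1.0))
print(f"extinction probability from 1 founder: {q:.4f}")   # ~0.824

# Lineages die out independently in this model, so starting from N founders the
# extinction probability is q**N, which is already negligible for modest N.
for n in (10, 100, 1_000):
    print(f"extinction probability from {n:>5} founders: {q ** n:.1e}")
```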
That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high the unavoidable background rate of such crises is (i.e. ones that remain even if you have a very sophisticated and well-resourced attempt to prevent them).
On current understanding I think the lower bounds for the rate of such exogenous events rely on things like false vacuum decay (and maybe GRBs while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though also plausible it's higher because there are risks that aren't observed/understood).
Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.
I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
To be clear, this makes a lot of sense to me, and I emphatically agree that understanding the arguments is valuable independently from whether this immediately changes a practical conclusion.
One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.
The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).
If I understand you correctly, the argument is not "autopoietic systems have persisted for billions of years" but more specifically "so far each new 'type' of such systems has persisted, so we should expect the most recent new type, 'information-based civilization', to persist as well".
This is an interesting argument I hadnât considered in this form.
(I think it's interesting because I think the case that it talks about a morally relevant long future is stronger than for the simple appeal to all autopoietic systems as a reference class. The latter includes many things that are so weird (like eusocial insects, asexually reproducing organisms, and potentially even non-living systems like autocatalytic chemical reactions) that the argument seems quite vulnerable to the objection that knowing that "some kind of autopoietic system will be around for billions of years" isn't that relevant. We arguably care about something that, while more general than current values or humans as a biological species, is more narrow than that.
[Tbc, I think there are non-crazy views that care at least somewhat about basically all autopoietic systems, but my impression is that the standard justification for longtermism doesn't want to commit itself to such views.])
However, I have some worries about survivorship bias: If there was a "failed major transition in evolution", would we know about it? Like, could it be that 2 billion years ago organisms started doing sphexual selection (a hypothetical form of reproduction that's as different from previous asexual reproduction as sexual reproduction but also different from the latter) but that this type of reproduction died out after 1,000 years, and similarly for sphexxual selection, sphexxxual selection, ...? Such that with full knowledge we'd conclude the reverse from your conclusion above, i.e. "almost all new types of autopoietic systems died out soon, so we should expect information-based civilization to die out soon as well"?
(FWIW my guess is that the answer actually is "our understanding of the history of evolution is sufficiently good that together with broad priors we can rule out at least an extremely high number of such 'failed transitions'", but I'm not sure and so I wanted to mention the possible problem.)
If there were lots of failed major transitions in evolution, that would also update us towards there being a greater number of attempted transitions than we previously thought, which would in turn update us positively on information-based civilization emerging eventually, no? Or are you assuming that these would be too weird/different from Homo sapiens such that we wouldn't share values enough?
Furthermore, sexual selection looks like a fairly simple and straightforward solution to the problem "organisms with higher life expectancy don't evolve quickly enough", so it doesn't look like there's a lot of space left for any alternatives.
Here's a relevant thread from ~5 years ago(!) when some people were briefly discussing points along these lines. I think it both illustrates some similar points and offers some quick responses to them.
Please do hit "see in context" to see some further responses there!
And agree, I would also like to further understand the arguments here :)
Thanks for the link. I did actually comment on that thread, and while I didn't have it specifically in mind it was probably close to the start of me asking questions along these lines.
To answer your linguistic objection directly, I think one reason/intuition I have for not trusting probabilities much above 99% or much below 1% is that the empirical failure rate for the reference class of "fairly decent forecaster considers a novel well-defined question for some time, and then becomes inside-view utterly confident in the result" is likely between 0.1% and 5%.
For me personally, I think the rate is slightly under 1%, including from misreading a question (e.g. forgetting the "not") and not understanding the data source.
This isn't decisive (I do indeed say things like giving <0.1% for direct human extinction from nuclear war or climate change this century) but it is a sort of weak outside-view argument for why anchoring on 1%-99% is not entirely absurd, even if we lived in an epistemic environment where basis points or one-in-a-million units were the default expressions of uncertainty.
Put another way: if the best research to date on how humans assign probabilities to novel well-defined problems is Expert Political Judgment, where political experts' "utter confidence" translates to a ~15% failure rate (and my personal anecdotal evidence lines up with the empirical results), I think I'd say something similar about 10-90% being the range of "reasonable" probabilities even if we used percentage-point-based language.
TL;DR: Iâm curious what the most detailed or strongly-evidenced arguments are in favour of extinction risk eventually falling to extremely low levels.
An argument I often see goes to the effect of âwe have a lot of uncertainty about the future, and given that it seems hard to be >99% confident that humanity will last <1 billion yearsâ. As written, this seems like a case of getting anchored by percentages and failing to process just how long one billion years really is (weak supporting evidence for the latter is that I sometimes see eerily similar estimates for one million years...). Perhaps this is my finance background talking, but I can easily imagine a world where the dominant way to express probability is basis points and our go-to probability for âvery unlikely thingâ was 1 bp rather than 1%, which is 100x smaller. Or we could have have a generic probability analogy to micromorts, which are 100x smaller still, etc. Yet such choices in language shouldnât be affecting our decisions or beliefs about the best thing to do.
On the object level, one type of event Iâm allowed to be extremely confident about is a large conjunction of events; if I flip a fair coin 30 times, the chance of getting 30 heads is approximately one in a billion.
Humanity surviving for a long time has a similar property; if you think that civilisation has a 50% chance of making it through the next 10,000 years, then conditional on that a 50% chance of making it through the next 20,000 years, then 50% for the next 40,000 years, etc. (applying a common rule-of-thumb for estimating uncertain lifetimes, starting with the observation that civilisation has been around for ~10,000 years so far), then the odds of surviving a billion years come out somewhere between 1 in 2^16 and 1 in 2^17, AKA roughly 0.001%.
We could also try to estimate current extinction risk directly based on known risks. Most attempts Iâve seen at this suggest that 50% to make it through the next 10,000 years, AKA roughly 0.007% per year, is very generous. As I see it, this is because an object-level anlysis of the risks suggests they have rising, not falling as the Lindy rule would imply.
When Iâve expressed this point to people in the past, I tend to get very handwavy (non-numeric) arguments about how a super-aligned-AI could dramatically cut existential risk to the levels required; another way of framing the above is that, to envisage a plausible future where we then have >one billion years in expectation, annualised risk needs to get to <0.0000001% in that future. Another thought is that space colonization could make humanity virtually invincible. So partly Iâm wondering if thereâs a better-developed version of these that accounts for the risks that would remain, or other routes to the same conclusion, since this assumption of a large-future-in-expectation seems critical to a lot of longtermist thought.
Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importanceâI think the most important argument for me is the analogy to computers.
Itâs possible to write âHumanity survives the next billion yearsâ as a conjunction of a billion events (humanity survives year 1, and year 2, and...). Itâs also possible to write âhumanity goes extinct next yearâ as a conjunction of a billion events (Alice dies, and Bob dies, and...). Both of those are quite weak prima facie justifications for assigning high confidence. You could say that the second conjunction is different, because the billionth person is very likely to die once the others have died (since there has apparently been some kind of catastrophe), but the same is true for survival. In both cases there are conceivable events that would cause every term of the conjunction to be true, and we need to address the probability of those common causes directly. Being able to write the claim as a conjunction doesnât seem to help you get to extreme probabilities without an argument about independence.
I feel you should be very hesitant to assign 99%+ probabilities without a good argument, and I donât think this is about anchoring to percent. The burden of proof gets stronger and stronger as you move closer to 1, and 100 is getting to be a big number. I think this is less likely to be a tractable disagreement than the other bullets but it seems worth mentioning for completeness. Iâm curious if you think there are other natural statements where the kind of heuristic you are describing (or any other similarly abstract heuristic) would justifiably get you to such high confidences. I agree with Max Danielâs point that it doesnât work for realistic versions of claims like âThis coin will come up heads 30 times in a row.â You say that itâs not exclusive to simplified models but I think Iâd be similarly skeptical of any application of this principle. (More generally, I think itâs not surprising to assign very small probabilities to complex statements based on weak evidence, but that it will happen much more rarely for simple statements. It doesnât seem promising to get into that though.)
I think space colonization is probably possible, though getting up to probabilities like 50% for space colonization feasibility would be a much longer discussion. (I personally think >50% probability is much more reasonable than <10%.) If there is a significant probability that we colonize space, and that spreading out makes the survival of different colonists independent (as it appears it would), then it seems like we end up with some significant probability of survival. That said, I would also assign ~1/â2 probability to surviving a billion years even if we were confined to Earth. I could imagine being argued down to 1â4 or even 1â8 but each successive factor of 2 seems much harder. So in some sense the disagreement isnât really about colonization.
Stepping back, I think the key object-level questions are something like âIs there any way to build a civilization that is very stable?â and âWill people try?â It seems to me you should have a fairly high probability on âyesâ to both questions. I donât think you have to invoke super-aligned AI to justify that conclusionâitâs easy to imagine organizing society in a way which drives existing extinction risks to negligible levels, and once thatâs done itâs not clear where youâd get to 90%+ probabilities for new risks emerging that are much harder to reduce. (Iâm not sure which step of this you get off the boat forâis it that you canât imagine a world that say reduced the risk of an engineered pandemic killing everyone to < 1/âbillion per year? Or that you think itâs very likely other much harder-to-reduce risks would emerge?)
A lot of this is about burden of proof arguments. Is the burden of proof on someone to exhibit a risk thatâs very hard to reduce, or someone to argue that there exists no risk that is hard to reduce? Once weâre talking about 10% or 1% probabilities it seems clear to me that the burden of proof is on the confident person. You could try to say âThe claim of âno bad risksâ is a conjunction over all possible risks, so itâs pretty unlikelyâ but I could just as well say âThe claim about âthe risk is irreducibleâ is a conjunction over all possible reduction strategies, so itâs pretty unlikelyâ so I donât think this gets us out of the stalemate (and the stalemate is plenty to justify uncertainty).
I do furthermore think that we can discuss concrete (kind of crazy) civilizations that are likely to have negligible levels of risk, given that e.g. (i) we have existence proofs for highly reliable machines over billion-year timescales, namely life, (ii) we have existence proofs for computers if you can build reliable machinery of any kind, (iii) itâs easy to construct programs that appear to be morally relevant but which would manifestly keep running indefinitely. We canât get too far with this kind of concrete argument, since any particular future we can imagine is bound to be pretty unlikely. But itâs relevant to me that e.g. stable-civilization scenarios seem about as gut-level plausible to me as non-AI extinction scenarios do in the 21st century.
Consider the analogous question âIs it possible to build computers that successfully carry out trillions of operations without errors that corrupt the final result?â My understanding is that in the early 20th century this question was seriously debated (though thatâs not important to my point), and it feels very similar to your question. Itâs very easy for a computational error to cascade and change the final result of a computation. Itâs possible to take various precautions to reduce the probability of an uncorrected error, but why think that itâs possible to reduce that risk to levels lower than 1 in a trillion, given that all observed computers have had fairly high error rates? Moreover, it seems that error rates are growing as we build bigger and bigger computers, since each element has an independent failure rate, including the machinery designed to correct errors. To really settle this we need to get into engineering details, but until youâve gotten into those details I think itâs clearly unwise to assign very low probability to building a computer that carries out trillions of steps successfullyâthe space of possible designs is large and people are going to try to find one that works, so youâd need to have some good argument about why to be confident that they are going to fail.
You could say that computers are an exceptional example Iâve chosen with hindsight. But Iâm left wondering if there are any valid applications of this kind of heuristicâwhatâs the reference class of which âhighly reliable computersâ are exceptional rather than typical?
If someone said:âA billion years is a long time. Any given thing that can plausibly happen should probably be expected to happen over that time periodâ then Iâd ask about why life survived the last billion years.
You could say that âa billion yearsâ is a really long time for human civilization (given that important changes tend to happen within decades or centuries) but not a long time for intelligent life (given that important changes takes millions of years). This is similar to what happens if you appeal to current levels of extinction risk being really high. I donât buy this because life on earth is currently at a period of unprecedentedly rapid change. You should have some reasonable probability of returning to more historically typical timescales of hundreds of millions of years, which in turn gives you a reasonable overall probability on surviving for hundreds of millions of years. (Actually I think we should have >50% probabilities for reversion to lower timescales, since we can tell that the current period of rapid growth will soon be over. Over our history rapid change and rapid growth have basically coincided, so itâs particularly plausible that returning to slow-growth will also return to slow-change.)
Applying the rule of thumb for estimating lifetimes to âthe human speciesâ rather than âintelligent lifeâ seems like itâs doing a huge amount of work. It might be reasonable to do the extrapolation using some mixture between these reference classes (and others), but in order to get extreme probabilities for extinction youâd need to have an extreme mixture. This is part of the general pattern why you donât usually end up with 99% probabilities for interesting questions without real argumentsâyou need to not only have a way of estimating that has very high confidence, you need to be very confident in that way of estimating.
You could appeal to some similar outside view to say âhumanity will undergo changes similar in magnitude to those that have occurred over the last billion years;â I think thatâs way more plausible (though I still wouldnât believe 99%) but I donât think that it matters for claims about the expected moral value of the future.
The doomsday argument can plausibly arrive at very high confidences based on anthropic considerations (if you accept those anthropic principles with very high confidence). I think many long-termists would endorse the conclusion that the vast majority of observers like us do not actually live in a large and colonizable universeânot at 99.999999% but at least at 99%. Personally I would reject the inference that we probably donât live in a large universe because I reject the implicit symmetry principle. At any rate, these lines of argument go in a rather different direction than the rest of your post and I donât feel like itâs what you are getting at.
Thanks for the long comment, this gives me a much richer picture of how people might be thinking about this. On the first two bullets:
You say you arenât anchoring, in a world where we defaulted to expressing probability in 1/â10^6 units called Ms Iâm just left feeling like you would write âyou should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number.â. So if itâs not anchoring, what calculation or intuition is leading you to specifically 99% (or at least, something in that ballpark), and would similarly lead you to roughly 990,000M with the alternate language?
My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01% â 99.99% range. Conjunctions force you to have extreme confidence somewhere, the only question is where. If I try to steelman your claim, I think I end up with an idea that we should apply our extreme confidence to the thing inside the product due to correlated cause, rather than the thing outside; does that sound fair?
The rest I see as an attempt to justify the extreme confidences inside the product, and Iâll have to think about more. The following are gut responses:
Iâm much more baseline cynical than you seem to be about peopleâs willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, Iâd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether thatâs correct or not, I donât think its wildly unusual among people who take climate change seriously*, and yet we almost certainly arenât doing enough to combat that as a society. This gives me little hope for dealing with <10% threats that will surely appear over the centuries, and as a result I found and continue to find the seemingly-baseline optimism of longtermist EA very jarring.
(Again, the above is a gut response as opposed to a reasoned claim.)
Yeah, Owen made a similar point, and actually I was using civilisation rather than âthe human speciesâ, which is 20x shorter still. I honestly hadnât thought about intelligent life as a possible class before, and that probably is the thing from this conversation that has the most chance of changing how I think about this.
*âThe survey from the Yale Program on Climate Change Communication found that 39 percent think the odds of global warming ending the human race are at least 50 percent. â
I roughly think that there simply isnât very strong evidence for this. I.e. I think it would be mistaken to have a highly resilient large credence in extinction risk eventually falling to ~0.0000001%, humanity or its descendants surviving for a billion years, or anything like that.
[ETA: Upon rereading, I realized the above is ambiguous. With âlargeâ I was here referring to something stronger than ânon-extremeâ. E.g. I do think itâs defensible to believe that, e.g. âIâm like 90% confident that over the next 10 years my credence in information-based civilization surviving for 1 billion years wonât fall below 0.1%â, and indeed thatâs a statement I would endorse. I think Iâd start feeling skeptical if someone claimed there is no way theyâd update to a credence below 40% or something like that.]
I think this is one of several reasons for why the ânaive caseâ for focusing on extinction risk reduction fails. (Another example of such a reason is the fact that, for most known hazards, collapse short of extinction seems way more likely than immediate extinction, that as a consequence most interventions affect both the probability of extinction and the probability and trajectory of various collapse scenarios, and that the latter effect might dominate but has unclear sign.)
I think the most convincing response is a combination of the following. Note, however, that the last two mostly argue that we should be longtermists despite the case for billion-year futures being shaky rather than defenses of that case itself.
You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for âmodestyââi.e. not ruling out very long futuresârests largely on model uncertainty, i.e. our inability to confidently identify the âcorrectâ model for reasoning about the length of the future.
For example, suppose I produce a coin from my pocket and ask you to estimate how likely it is that in my first 30 flips I get only heads. Your all-things-considered credence will be dominated by your uncertainty over whether my coin is strongly biased toward heads. Since 30 heads are vanishingly unlikely if the coin is fair, this is the case even if your prior says that most coins someone produces from their pocket are fair: âvanishingly unlikelyâ here is much stronger (in this case around 10â9) than your prior can justifiably be, i.e. âmost coinsâ might defensibly refer to 90% or 99% or 99.99% but not 99.9999999%.
This insight that extremely low credences all-things-considered are often âforbiddenâ by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
Note that I think itâs still true that there is a possible epistemic state (and probably even model we can write down now) that rules out very long futures with extreme confidence. The point just is that we wonât be able to get to that epistemic state in practice.
Overall, I think the lower bound on the all-things-considered credence we should have in some speculative scenario often comes down to understanding how âfundamentalâ our model uncertainty is. I.e. roughly: to get to models that have practically significant credence in the scenario in question, how fundamentally would I need to revise my best-guess model of the world?
E.g. if Iâm asking whether the LHC will blow up the world, or whether itâs worth looking for the philosopherâs stone, then I would need to revise extremely fundamental aspects of my world model such as fundamental physicsâwe are justified in having pretty high credences in those.
By contrast, very long futures seem at least plausibly consistent with fundamental physics as well as plausible theories for how cultural evolution, technological progress, economics, etc. work.
It is here, and for this reason, that points like âbut itâs conceivable that superintelligent AI will reduce extinction risk to near-zeroâ are significant.
Therefore, model uncertainty will push me toward a higher credence in a very long future than in the LHC blowing up the world (but even for the latter my credence is plausibly dominated by model uncertainty rather than my credence in this happening conditional on my model of physics being correct).
Longtermism goes through (i.e. it looks like we can have most impact by focusing on the long-term) on much less extreme time scales than 1 billion.
Some such less extreme time scales have âmore defensibleâ reasons behind them, e.g. outside view considerations based on the survival of other species or the amount of time humanity or civilization have survived so far. The Lindy rule prior you describe is one example.
There is a wager for long futures: we can have much more impact if the future is long, so these scenarios might dominate our decision-making even if they are unlikely.
(NB I think this is a wager that is unproblematic only if we have independently established that the probability of the relevant futures isnât vanishingly small. This is because of the standard problems around Pascalâs wager.)
That all being said, my views on this feel reasonably but not super resilientâlike itâs âonlyâ 10% Iâll have changed my mind about this in major ways in 2 years. I also think there is room for more work on how to best think about such questions (the Ord et al. paper is a great example), e.g. checking that this kind of reasoning doesnât âprove too muchâ or leads to absurd conclusions when applied to other cases.
Thanks for this. I wonât respond to your second/âthird bullets; as you say itâs not a defense of the claim itself, and while itâs plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I canât defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
On your first bullet:
Iâll go and read the paper you mention, but flagging that my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and itâs the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are âforbiddenâ (this could well be what the paper tries to do). We would then need to make sure that no such event can be expressed as a conjunction of a very large number of other such events.
Concretely, P(Humanity survives one billion years) is the product of one million probabilities of surviving each millenia, conditional on having survivied up to that point. As a result, we either need to set some of the intervening probabilities like P(Humanity survivies the next millenia | Humanity has survived to the year 500,000,000 AD) extremely high, or we need to set the overall product extremely low. Setting everything to the range 0.01% â 99.99% is not an option, without giving up on arithmetic or probability theory. And of course, I could break the product into a billion-fold conjunction where each component was âsurvive the next yearâ if I wanted to make the requirements even more extreme.
Note I think it is plausible such extremes can be justified, since it seems like a version of humanity that has survived 500,000 millenia really should have excellent odds of surviving the next millenium. Indeed, I think that if you actually write out the model uncertainty argument mathematically, what ends up happening here is the fact that humanity has survivied 500,000 millenia is massive overwhelming Bayesian evidence that the âcorrectâ model is one of the ones that makes such a long life possible, allowing you to reach very extreme credences about the then-future. This is somewhat analagous to the intuitive extreme credence most people have that they wonât die in the next second.
I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren't "forbidden" in general.
(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)
I still think that the distinction between credence/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:
- I think it's often harder to justify an extreme credence that a particular model is right than it is to justify an extreme probability within a model.
- Often when it seems we have extreme credence in a model, this just holds "at a certain level of detail"; if we looked at a richer space of models that makes more fine-grained distinctions, we'd say that our credence is distributed over a (potentially very large) family of models.
- There is a difference between an extreme all-things-considered credence (i.e., in this simplified way of thinking about epistemics, the "expected credence" across models) and being highly confident in an extreme credence; I think the latter is less often justified than the former. And again, if it seems that the latter is justified, I think it'll often be because an extreme amount of credence is distributed among different models, but all of these models agree about some event we're considering. (E.g. ~all models agree that I won't spontaneously die in the next second, or that Santa Claus isn't going to appear in my bedroom.)
- When different models agree that some event is the conjunction of many others, each model will have an extreme credence for some event, but the models might disagree about which events the credence is extreme for. Taken together (i.e. across events/decisions), your all-things-considered credences might therefore look "funny" or "inconsistent" (by the lights of any single model). E.g. you might have non-extreme all-things-considered credence in two events based on two different models that are inconsistent with each other, each of which rules out one of the events with extreme probability but not the other. (A toy numerical sketch of this follows below.)
I acknowledge that I'm making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what's going on I would need to spell out what exactly I mean by "often" etc. (Because, as I said, I do agree that these claims don't always hold!)
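A toy numerical sketch of that "funny-looking" mixture effect (all numbers invented for illustration): two mutually inconsistent models each treat a different event as near-impossible, yet the all-things-considered credences for both events come out unremarkable.

```python
# Toy sketch (invented numbers): two mutually inconsistent models, 50/50 credence.
# Each model assigns an extreme probability to a different event.
models = {
    "model_1": {"weight": 0.5, "P(E1)": 0.9999, "P(E2)": 0.0001},
    "model_2": {"weight": 0.5, "P(E1)": 0.0001, "P(E2)": 0.9999},
}

for event in ("P(E1)", "P(E2)"):
    mixture = sum(m["weight"] * m[event] for m in models.values())
    print(event, "all-things-considered =", mixture)
# Both come out at 0.5: non-extreme overall credences, even though every model
# you have any credence in treats one event as a near-certainty and the other
# as nearly impossible.
```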
Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a finite chance of extinction in each generation but these diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
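A minimal sketch of that branching-process claim (standard Galton-Watson arithmetic; the code and the 100-individual example are my own illustration): with Poisson(1.1) offspring, the extinction probability starting from one individual is the smallest fixed point of q = exp(1.1·(q − 1)), about 0.82, and starting from N individuals it is q^N, which shrinks very fast.

```python
import math

# Galton-Watson branching process with Poisson(lambda) offspring, lambda = 1.1.
# The extinction probability q (starting from one individual) is the smallest
# fixed point of q = exp(lambda * (q - 1)); iterate from 0 to find it.
lam = 1.1
q = 0.0
for _ in range(10_000):
    q = math.exp(lam * (q - 1.0))

print(f"extinction probability from 1 individual:   {q:.3f}")        # ~0.824
print(f"extinction probability from 100 individuals: {q**100:.2e}")  # ~4e-9
# Once the population is large, within-model extinction risk is negligible,
# which is why the interesting question becomes the correlated crises that
# this simple model leaves out.
```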
That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high the unavoidable background rate of such crises is (i.e. the crises that remain even if you have a very sophisticated and well-resourced attempt to prevent them).
On current understanding, I think the lower bounds for the rate of such exogenous events rely on things like false vacuum decay (and maybe gamma-ray bursts while we're still local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though it's also plausible it's higher because there are risks that aren't observed/understood).
Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.
To be clear, this makes a lot of sense to me, and I emphatically agree that understanding the arguments is valuable independently from whether this immediately changes a practical conclusion.
One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.
The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).
If I understand you correctly, the argument is not "autopoietic systems have persisted for billions of years" but more specifically "so far each new 'type' of such systems has persisted, so we should expect the most recent new type, 'information-based civilization', to persist as well".
This is an interesting argument I hadnât considered in this form.
(I think it's interesting because the case that it talks about a morally relevant long future is stronger than for the simple appeal to all autopoietic systems as a reference class. The latter includes many things that are so weird (like eusocial insects, asexually reproducing organisms, and potentially even non-living systems like autocatalytic chemical reactions) that the argument seems quite vulnerable to the objection that knowing that "some kind of autopoietic system will be around for billions of years" isn't that relevant. We arguably care about something that, while more general than current values or humans as a biological species, is narrower than that.
[Tbc, I think there are non-crazy views that care at least somewhat about basically all autopoietic systems, but my impression is that the standard justification for longtermism doesn't want to commit itself to such views.])
However, I have some worries about survivorship bias: if there was a "failed major transition in evolution", would we know about it? Like, could it be that 2 billion years ago organisms started doing sphexual selection (a hypothetical form of reproduction that's as different from previous asexual reproduction as sexual reproduction is, but also different from the latter) but that this type of reproduction died out after 1,000 years, and similarly for sphexxual selection, sphexxxual selection, ...? Such that with full knowledge we'd conclude the reverse of your conclusion above, i.e. "almost all new types of autopoietic systems died out soon, so we should expect information-based civilization to die out soon as well"?
(FWIW my guess is that the answer actually is "our understanding of the history of evolution is sufficiently good that, together with broad priors, we can rule out at least an extremely high number of such 'failed transitions'", but I'm not sure and so I wanted to mention the possible problem.)
If there were lots of failed major transitions in evolution, that would also update us towards there being a greater number of attempted transitions than we previously thought, which would in turn update us positively on information-based civilization emerging eventually, no? Or are you assuming that these would be too weird/different from Homo sapiens such that we wouldn't share values enough?
Furthermore, sexual selection looks like a fairly simple and straightforward solution to the problem "organisms with higher life expectancy don't evolve quickly enough", so it doesn't look like there's a lot of space left for any alternatives.
Here's a relevant thread from ~5 years ago(!) where some people were briefly discussing points along these lines. I think it both illustrates some similar points and offers some quick responses to them.
Please do hit "see in context" to see some further responses there!
And agree, I would also like to further understand the arguments here :)
Thanks for the link. I did actually comment on that thread, and while I didn't have it specifically in mind it was probably close to the start of me asking questions along these lines.
To answer your linguistic objection directly, I think one reason/intuition I have for not trusting probabilities much above 99% or much below 1% is that the empirical failure rate for the reference class of "fairly decent forecaster considers a novel well-defined question for some time, and then becomes inside-view utterly confident in the result" is likely between 0.1% and 5%.
For me personally, I think the rate is slightly under 1%, including failures from misreading a question (e.g. forgetting the "not") and from not understanding the data source.
This isn't decisive (I do indeed say things like giving <0.1% for direct human extinction from nuclear war or climate change this century), but it is a sort of weak outside-view argument for why anchoring on 1% to 99% is not entirely absurd, even if we lived in an epistemic environment where basis points or one-in-a-million probabilities were the default expressions of uncertainty.
Put another way: if the best research to date on how humans assign probabilities to novel well-defined problems is Expert Political Judgment, where political experts' "utter confidence" translates to a ~15% failure rate (and my personal anecdotal evidence lines up with the empirical results), then I'd say something similar about 10% to 90% being the range of "reasonable" probabilities even if we used percentage-point-based language.