Finally, I’m a bit suspicious of infinitesimal probabilities due to the strength they give the prior. They imply we’d need enormously strong evidence to update much at all in a way that seems unreasonable to me.
[...]
Cf. some of Joe’s discussion of settling on infinitesimal priors here.
I think there is a potential misunderstanding here. Joe Carlsmith's[1] discussion of the constraints on future updating applies to one's best guess. In contrast, my astronomically low best guess prior is not supposed to be either my current best guess or a preliminary best guess from which one should formally update towards one's best guess. That being said, historical war deaths seem to me like the most natural prior for assessing future war deaths, so I see some merit in using my astronomically low best guess prior as a preliminary best guess.
I also agree with Joe that an astronomically low annual AI extinction risk (e.g. 6.36*10^-14) would not make sense (see this somewhat related thread). However, I would think about the possibility of AI killing all humans in the context of AI risk, not great power war.
Let’s take your preferred estimate of an annual probability of “6.36*10^-14”. That’s a 1 in 15,723,270,440,252 chance. That is, 1 in 15 trillion years.
I look around at the world and I see a nuclear-armed state fighting against a NATO-backed ally in Ukraine; I see conflict once again spreading throughout the Middle East; I see the US arming and perhaps preparing to defend Taiwan against China, which is governed by a leader who claims to consider reunification both inevitable and an existential issue for his nation.
And I see nuclear arsenals that still top 12,000 warheads and growing; I see ongoing bioweapons research powered by ever-more-capable biotechnologies; and I see obvious military interest in developing AI systems and autonomous weapons.
This does not seem like a situation that only leads to total existential destruction once every 15 trillion years.
I feel like the sentiment you are expressing by describing current events and trends would also have applied in the past, and would apply today to risks which you might consider overly low. On the one hand, I appreciate that a probability like 6.36*10^-14 intuitively feels way too small. On the other, humans are not designed to intuitively/directly assess the probability of rare events in a reliable way. Such assessments involve many steps, and therefore give rise to scope neglect.
As a side note, I do not think there is an evolutionary incentive for an individual human to accurately distinguish between an extinction risk of 10^-14 and 0.01 %, because both are negligible in comparison with the annual risk of death of 1 % (for a life expectancy of 100 years), as the sketch after the quote below illustrates. Relatedly, I mentioned in the post that:
In general, I suspect there is a tendency to give probabilities between 1 % and 99 % for events whose mechanics we do not understand well [e.g. extinction conditional on a war larger than World War 2], given this range encompasses the vast majority (98 %) of the available linear space (from 0 to 1), and events in everyday life one cares about are not that extreme. However, the available logarithmic space is infinitely vast, so there is margin for such guesses to be major overestimates. In the context of tail risk, subjective guesses can easily fail to adequately account for the faster decay of the tail distribution as severity approaches the maximum.
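To make the side note about evolutionary incentives concrete, here is a minimal illustrative sketch (the 1 % baseline annual death risk and the two candidate extinction risks are taken from the side note above; the sketch is purely illustrative, not part of the original analysis):

```python
# Minimal illustration: how much an annual extinction risk adds to an
# individual's ~1 %/year baseline risk of death (for a 100-year life
# expectancy). Both candidate risks barely change the total, so there is
# little evolutionary pressure to tell them apart.
baseline_death_risk = 0.01  # 1 %/year baseline from the side note above

for extinction_risk in (1e-14, 1e-4):  # 10^-14 vs 0.01 %
    total = 1 - (1 - baseline_death_risk) * (1 - extinction_risk)
    print(f"extinction risk {extinction_risk:.0e} -> total annual death risk {total:.6%}")
# extinction risk 1e-14 -> total annual death risk 1.000000%
# extinction risk 1e-04 -> total annual death risk 1.009900%
```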
In addition, I guess my astronomically low annual war extinction risk feels like an extreme value to many because they have in the back of their minds Toby's guesses for the existential risk between 2021 and 2120 given in The Precipice. The guess was 0.1 % for nuclear war, which corresponds to an annual existential risk of around 10^-5 (see the conversion sketched after the quote below), way larger than the estimates for annual war extinction risk I present in my post. Toby does not mechanistically explain how he got his guesses, but I do not think he used quantitative models to derive them. So I think they may well be prone to scope neglect. In terms of Toby's guesses, I also mentioned in the post that:
In general, I agree with David Thorstad that Toby Ord’s guesses for the existential risk between 2021 and 2120 given in The Precipice are very high (e.g. 0.1 % for nuclear war). In the realm of the more anthropogenic AI, bio and nuclear risk, I personally think underweighting the outside view is a major reason leading to overly high risk. I encourage readers to check David’s series exaggerating the risks, which includes subseries on climate, AI and bio risk.
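Going back to Toby's guesses, here is a minimal sketch of the conversion from his 0.1 % guess over 2021 to 2120 to an annual risk of around 10^-5, assuming a constant and independent risk in each of the 100 years:

```python
# Converting a 0.1 % existential risk over the 100 years from 2021 to 2120
# into an annual risk, assuming a constant, independent risk each year.
risk_per_century = 0.001
annual_risk = 1 - (1 - risk_per_century) ** (1 / 100)
print(f"{annual_risk:.2e}")  # prints 1.00e-05, i.e. around 10^-5/year
```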
To give an example that is not discussed by David, Salotti 2022 estimated the extinction risk per century from asteroids and comets to be 2.2*10^-12 (see Table 1), which is 6 (= log10(10^-6/(2.2*10^-12))) orders of magnitude lower than Toby Ord's guess for the existential risk (the quick check below verifies this arithmetic). The concept of existential risk is quite vague, but I do not think one can say existential risk from asteroids and comets is 6 orders of magnitude higher than extinction risk from these:
There have been 5 mass extinctions, and the impact winter involved in the last one, which played a role in the extinction of the dinosaurs, may well have contributed to the emergence of mammals, and ultimately humans.
It is possible a species better than humans at steering the future would have evolved given fewer mass extinctions, or in the absence of the last one in particular, but this is unclear. So I would say the above is some evidence that existential risk may even be lower than extinction risk.
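As a quick check of the order-of-magnitude comparison above (taking Toby's guess of 10^-6 per century for asteroids and comets, as in the formula above):

```python
import math

# Gap between Toby Ord's guess for existential risk from asteroids and comets
# over the next 100 years (10^-6, as used in the formula above) and
# Salotti 2022's extinction risk of 2.2*10^-12 per century.
ord_guess = 1e-6
salotti_estimate = 2.2e-12
print(f"{math.log10(ord_guess / salotti_estimate):.2f}")  # 5.66, i.e. roughly 6 orders of magnitude
```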
I know you’re only talking about the prior, but your preferred estimate implies we’d need a galactically-enormous update to get to a posterior probability of war x-risk that seems reasonable. So I think something might be going wrong.
The methodology I followed in my analysis is quite similar to yours. The major differences are that:
I fitted distributions to the logarithm of the top 10 % of the annual war deaths of combatants as a fraction of the global population, whereas you relied on an extinction risk per war from Bear, obtained by fitting a power law to war deaths of combatants per war. As I commented, it is unclear to me whether this is a major issue, but I prefer my approach.
I dealt with 111 types of distributions, whereas you focussed on 1.
For the distribution you used (a Pareto), I got an annual probability of a war causing human extinction of 0.0122 %, which is very similar to the 0.0124 %/year corresponding to your estimate of 0.95 % over 77 years (see the conversion sketched right after this list).
Aggregating the results of the top 100 distributions, I got 6.36*10^-14.
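For reference, here is the conversion behind the 0.0124 %/year figure in the list above (a minimal Python sketch, assuming a constant and independent risk in each of the 77 years):

```python
# Converting the 0.95 % war extinction risk over 77 years into an annual
# probability, assuming a constant, independent risk each year.
risk_over_period = 0.0095
years = 77
annual_risk = 1 - (1 - risk_over_period) ** (1 / years)
print(f"{annual_risk:.4%}")  # prints 0.0124 %, matching the figure above
```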
You might be thinking something along the lines of:
Given no fundamental flaw in my methodology, one should update towards an astronomically low war extinction risk.
Given a fundamental flaw in my methodology, one should update towards a war extinction risk e.g. 10 % as high as your 0.0124 %/year, i.e. 0.00124 %/year.
However, given the similarities between our methodologies, I think there is a high chance that any fundamental flaw in my methodology would affect yours too. So, given a fundamental flaw in mine, I would mostly believe that neither my best guess prior nor your best guess could be trusted, and, to the extent your best guess for the war extinction risk is informed by your methodology, I would not use it as a prior. In this case, one would have to come up with a better methodology rather than multiplying your annual war extinction risk by e.g. 10 %.
I also feel like updating your prior via multiplication by something like 10 % would be quite arbitrary, because my estimates for the annual war extinction risk are all over the map. Across all 111 distributions, and the 3 values for the deaths of combatants as a fraction of the total deaths (10 %, 50 % and 90 %) I studied, I got estimates for the annual probability of a war causing human extinction ranging from 0 to 8.84 %. Considering just my best guess of war deaths of combatants equal to 50 % of the total deaths, the annual probability of a war causing human extinction still ranges from 0 to 2.95 %. Given such wide ranges, I would instead update towards a state of greater cluelessness or less resilience. In turn, these would imply a greater need for a better methodology, and more research on quantifying the risk of war in general.
[1] I like to use the full name on the 1st occasion a name is mentioned, and then just the 1st name afterwards.