I’m worried that modelling the tail risk here as a power law is doing a lot of work, since it’s an assumption which makes the risk of very large events quite small (especially since you’re taking a power law in the ratio, aside from the threshold from requiring a certain number of humans to have a viable population, the structure of the assumption essentially gives that extinction is impossible).
But we know from (the fancifully named) dragon king theory that the very largest events are often substantially larger than would be predicted by power law extrapolation.
Thanks for the critique, Owen! I strongly upvoted it.
I’m worried that modelling the tail risk here as a power law is doing a lot of work, since it’s an assumption which makes the risk of very large events quite small (especially since you’re taking a power law in the ratio
Assuming the PDF of the ratio between the initial and final population follows a loguniform distribution (instead of a power law), the expected value density of the cost-effectiveness of saving a life would be constant, i.e. it would not depend on the severity of the catastrophe. However, I think assuming a loguniform distribution for the ratio between the initial and final population majorly overestimates tail risk. For example, I think a population loss (over my period length of 1 year[1]) of 90 % to 99 % (ratio between the initial and final population of 10 to 100) is more likely than a population loss of 99.99 % to 99.999 % (ratio between the initial and final population of 10 k to 100 k), whereas a loguniform distribution would predict both of these to be equally likely.
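For concreteness, here is a quick sketch of that comparison. The support of the ratio and the power-law exponent below are purely illustrative assumptions of mine, not figures from the post:

```python
# Quick illustrative sketch: probability that the ratio r between the initial and
# final population falls in [10, 100] (90 % to 99 % population loss) versus
# [10 k, 100 k] (99.99 % to 99.999 % loss), under a loguniform distribution and
# under a power law. The support [1, 10^10] and the exponent alpha = 1.5 are
# illustrative assumptions, not figures from the post.
from math import log

LOWER, UPPER = 1.0, 1e10  # assumed support for the ratio r

def p_loguniform(a, b):
    """P(a <= r <= b) for r loguniform on [LOWER, UPPER] (PDF proportional to 1/r)."""
    return (log(b) - log(a)) / (log(UPPER) - log(LOWER))

def p_power_law(a, b, alpha=1.5):
    """P(a <= r <= b) for r with PDF proportional to r**-alpha on [LOWER, UPPER]."""
    def cdf(x):
        return (x**(1 - alpha) - LOWER**(1 - alpha)) / (UPPER**(1 - alpha) - LOWER**(1 - alpha))
    return cdf(b) - cdf(a)

for a, b in [(10, 100), (1e4, 1e5)]:
    print(f"ratio in [{a:g}, {b:g}]: loguniform {p_loguniform(a, b):.3f}, "
          f"power law {p_power_law(a, b):.2e}")

# The loguniform assigns both decades the same probability (0.1), whereas the
# power law assigns far less probability to the 10 k to 100 k decade.
```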
aside from the threshold from requiring a certain number of humans to have a viable population
My reduction in population is supposed to refer to a period of 1 year, but the above only decreases population over longer horizons. I have now clarified this in the post.
the structure of the assumption essentially gives that extinction is impossible
I think human extinction over 1 year is extremely unlikely. I estimated 5.93*10^-12 for nuclear wars, 2.20*10^-14 for asteroids and comets, 3.38*10^-14 for supervolcanoes, a prior of 6.36*10^-14 for wars, and a prior of 4.35*10^-15 for terrorist attacks.
But we know from (the fancifully named) dragon king theory that the very largest events are often substantially larger than would be predicted by power law extrapolation.
Interesting! I did not know about that theory. On the other hand, there are counterexamples. David Roodman has argued the tail risk of solar storms decreases faster than predicted by a power law.
I have also found the tail risk of wars decreases faster than predicted by a power law.
Do you have a sense of the extent to which the dragon king theory applies in the context of deaths in catastrophes?
I think human extinction over 1 year is extremely unlikely. I estimated 5.93*10^-12 for nuclear wars, 2.20*10^-14 for asteroids and comets, 3.38*10^-14 for supervolcanoes, a prior of 6.36*10^-14 for wars, and a prior of 4.35*10^-15 for terrorist attacks.
Without having dug into them closely, these numbers don’t seem crazy to me for the current state of the world. I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and “new technology” could totally facilitate that.
Do you have a sense of the extent to which the dragon king theory applies in the context of deaths in catastrophes?
Unfortunately, for the relevant part of the curve (catastrophes large enough to wipe out large fractions of the population) we have no data, so we’ll be relying on theory. My understanding (based significantly just on the “mechanisms” section of that Wikipedia page) is that dragon kings tend to arise in cases where there’s a qualitatively different mechanism which causes the very large events but doesn’t show up in the distribution of smaller events. In some cases we might not have such a mechanism, and in others we might. It certainly seems plausible to me when considering catastrophes (and this is enough to drive significant concern, because if we can’t rule it out it’s prudent to be concerned, and risk having wasted some resources if we turn out to be in a world where the total risk is extremely small), via the kind of mechanisms I allude to in the first half of this comment.
I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and “new technology” could totally facilitate that.
To clarify, my estimates are supposed to account for unknown unknowns. Otherwise, they would be many orders of magnitude lower.
Unfortunately, for the relevant part of the curve (catastrophes large enough to wipe out large fractions of the population) we have no data, so we’ll be relying on theory.
I found the “Unfortunately” funny!
My understanding (based significantly just on the “mechanisms” section of that Wikipedia page) is that dragon kings tend to arise in cases where there’s a qualitatively different mechanism which causes the very large events but doesn’t show up in the distribution of smaller events. In some cases we might not have such a mechanism, and in others we might.
Makes sense. We may even have both cases in the same tail distribution. The tail distribution of the annual war deaths as a fraction of the global population is characteristic of a power law from 0.001 % to 0.01 %, then it seems to have a dragon king from around 0.01 % to 0.1 %, and then it decreases much faster than predicted by a power law. Since the tail distribution can decay both slower and faster than a power law, I feel like a power law is still a decent assumption.
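For illustration, here is a minimal sketch of how one can eyeball this kind of tail behaviour, i.e. whether the empirical tail sits above or below a power law fitted to the moderate range. The data below are a synthetic placeholder, not my actual estimates of war deaths:

```python
# Minimal sketch with synthetic placeholder data (not my actual estimates):
# fit a power law to the moderate part of the tail and compare its extrapolation
# with the empirical survival function further out. An empirical tail decaying
# faster than a power law falls below the extrapolation; a dragon king sits above it.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical annual war deaths as a fraction of the global population.
fractions = np.sort(rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=10_000))
survival = 1.0 - np.arange(1, fractions.size + 1) / fractions.size  # empirical P(X > x)

# Straight-line fit of log survival vs log x over the moderate tail only.
moderate = (fractions >= 1e-4) & (fractions <= 1e-3) & (survival > 0)
slope, intercept = np.polyfit(np.log(fractions[moderate]), np.log(survival[moderate]), 1)

for x in (1e-3, 1e-2, 1e-1):
    extrapolated = np.exp(intercept) * x**slope
    empirical = (fractions > x).mean()
    print(f"x = {x:.0e}: power-law extrapolation {extrapolated:.1e}, empirical {empirical:.1e}")
```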
It certainly seems plausible to me when considering catastrophes (and this is enough to drive significant concern, because if we can’t rule it out it’s prudent to be concerned, and risk having wasted some resources if we turn out to be in a world where the total risk is extremely small), via the kind of mechanisms I allude to in the first half of this comment.
I agree we cannot rule out dragon kings (flatter sections of the tail distribution), but this is not enough for saving lives in catastrophes to be more valuable than in normal times. At least for the annual war deaths as a fraction of the global population, the tail distribution still ends up decaying faster than a power law despite the presence of a dragon king, so the expected value density of the cost-effectiveness of saving lives is still lower for larger wars (at least given my assumption that the cost to save a life does not vary with the severity of the catastrophe). I concluded the same holds for the famine deaths caused by the climatic effects of nuclear war.
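To spell out the scaling implied by my claims above (constant expected value density under a loguniform PDF, and a cost to save a life that does not vary with severity), the expected value density of the cost-effectiveness of saving a life goes roughly as

$$v(r) \propto \frac{f(r)\, r}{c},$$

where $r$ is the ratio between the initial and final population, $f$ its PDF, and $c$ the cost to save a life. A loguniform $f(r) \propto 1/r$ gives a constant $v$; a power law $f(r) \propto r^{-\alpha}$ with $\alpha > 1$ gives $v(r) \propto r^{1 - \alpha}$, which decreases with severity; and a tail decaying faster than a power law makes $v$ decrease faster still.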
One could argue we should not only put decent weight on the existence of dragon kings, but also on the possibility that they will make the expected value density of saving lives higher than in normal times. However, this would be assuming the conclusion.