Constant per-century risk is implausible because these are conditional probabilities: each century's extinction risk is conditioned on having survived every century before it, which means the risks are not independent.
For example, the probability of surviving the 80th century from now is conditioned on having survived the next 79 centuries. And the worlds where human civilization survives the next 79 centuries are mostly not worlds where we face a 10% chance of extinction risk each century and keep managing to stumble along. Rather, they’re worlds where the per-century probabilities of extinction over the next 79 centuries are generally lower than that, for whatever reason. And worlds where the next 79 per-century extinction probabilities are generally lower than 10% are mostly worlds where the 80th extinction probability is also lower than that. So, structurally we should expect extinction probabilities to go down over time, as cumulative extinction risk means filtering for less extinction-prone worlds.
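This filtering effect can be made concrete with a minimal sketch. The numbers below are purely illustrative assumptions: suppose worlds come in three types with different per-century extinction risks, and we compute the conditional risk for a given century after conditioning on survival so far.

```python
# Hypothetical mixture of worlds: we don't know the true per-century
# extinction risk, so treat it as drawn from a prior over world types.
# These particular risks and weights are illustrative, not estimates.
worlds = [0.30, 0.10, 0.01]   # per-century extinction probability by world type
weights = [1/3, 1/3, 1/3]     # prior probability of each world type

def conditional_risk(century):
    """P(extinct in this century | survived all previous centuries)."""
    # Posterior weight of each world type after surviving `century - 1`
    # centuries is proportional to prior * survival probability so far.
    posterior = [w * (1 - p) ** (century - 1) for w, p in zip(weights, worlds)]
    total = sum(posterior)
    return sum(pw * p for pw, p in zip(posterior, worlds)) / total

risks = [conditional_risk(c) for c in (1, 10, 80)]
```

The conditional risk starts at the prior mean (about 13.7% here) and falls toward the safest world type's risk, because surviving 79 centuries concentrates posterior weight on the low-risk worlds.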
Dice-roll-style models that assume independence can be helpful for sketching the broad contours of a problem, but in many cases they don't capture the structure of the situation well enough to support quantitative forecasts. The same issue arises in many domains, such as projecting how many games a sports team will win over the upcoming season. If your central estimate is that a team will win 30% of its games, worlds where the team winds up winning more than half of its games are mostly not worlds where the team kept getting lucky game after game. Rather, they're worlds where the team was better than expected, so it generally had more than a 30% chance of winning each game.
A model that doesn't account for this non-independence across games (like just using a binomial distribution based on your central estimate of what fraction of games the team will win) builds in the assumption that the only way to win more games is for a bunch of 30% events to go the team's way. It implicitly rules out the possibility that the team is better than your central estimate, so it will give inaccurate distributions. For example, 3 of the 30 NBA teams had less than a 1/1000 chance of winning as many games as they did this year according to the distributions you'd get from a simple binomial model using the numbers here.
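The gap between the two approaches is easy to see numerically. The sketch below compares the tail probability of a winning record under a fixed-probability binomial against the same 30% central estimate with uncertainty about team quality included. The three-point mixture is an illustrative assumption standing in for a fuller distribution over team quality, not fitted numbers.

```python
from math import comb

n = 82  # games in an NBA regular season

def binom_tail(p, n, k):
    """P(wins >= k) under a binomial with fixed per-game win probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Plain binomial: the central estimate p = 0.30 treated as the exact truth.
fixed = binom_tail(0.30, n, 42)  # chance of a winning record (>= 42 of 82)

# Same central estimate, but acknowledging uncertainty about how good the
# team actually is: an illustrative three-point mixture over the true p.
mixture_p = [0.15, 0.30, 0.45]
mixture_w = [0.25, 0.50, 0.25]
mixed = sum(w * binom_tail(p, n, 42) for w, p in zip(mixture_p, mixture_w))
```

The fixed-p model puts roughly one-in-tens-of-thousands odds on a winning record, while the mixture model, with the same central estimate, puts the odds orders of magnitude higher, because "the team is better than we thought" is now a live possibility rather than one ruled out by construction.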
Similarly, Sam Wang's 2016 election forecast gave Trump less than a 1% chance of winning the US presidency because it failed to account correctly for correlated uncertainty across states. By not sufficiently tracking non-independence, the model accidentally made extremely strong assumptions, which led to a very overconfident prediction.
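Correlated errors produce the same overconfidence in a toy election model. The simulation below is a hypothetical sketch, not Wang's actual model: both versions give each state the same marginal polling error, but the naive version treats errors as independent across states, while the other includes a shared national swing.

```python
import random

random.seed(0)

N_SIM = 50_000
N_STATES = 10    # hypothetical swing states; the candidate needs a majority
LEAD = 4.0       # assumed polling lead, in points, in every state
STATE_SD = 3.0   # independent, state-specific polling error
NATL_SD = 3.0    # shared national error (what the naive model ignores)

def loss_prob(correlated):
    """P(candidate loses the majority of states) under each error model."""
    losses = 0
    for _ in range(N_SIM):
        shared = random.gauss(0, NATL_SD) if correlated else 0.0
        # The naive model folds all error into independent per-state noise,
        # so each state's marginal distribution is the same in both models.
        sd = STATE_SD if correlated else (STATE_SD**2 + NATL_SD**2) ** 0.5
        wins = sum(LEAD + shared + random.gauss(0, sd) > 0
                   for _ in range(N_STATES))
        losses += wins <= N_STATES // 2
    return losses / N_SIM

naive = loss_prob(correlated=False)
correlated = loss_prob(correlated=True)
```

Even though every individual state looks identical under both models, the correlated model gives the trailing candidate several times the naive model's chance of an upset, because one bad national polling miss can flip many states at once.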