This is a very interesting post. Here’s how it fits into my thinking about existential risk and time and space.
We already know about several related risk effects over space and time:
If different locations in space can serve as backups, such that humanity fails only if all of them fail simultaneously, then the number of these only needs to grow logarithmically over time for there to be a non-zero chance of indefinite survival.
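(To sketch why, with illustrative symbols of my own: if there are $n(t)$ independent backups at time $t$, each failing with chance $p$ per period, then the chance of losing all of them in period $t$ is $p^{n(t)}$. With $n(t) = c\ln t$ this equals $t^{-c\ln(1/p)}$, and $\sum_t t^{-c\ln(1/p)}$ converges whenever $c\ln(1/p) > 1$, which keeps the product of per-period survival probabilities, i.e. the chance of indefinite survival, above zero.)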
However, this does not solve existential risk, as it only helps with uncorrelated risks such as asteroid impacts. Some risks are correlated across all locations in a planetary system or all stars in a galaxy (often because an event in one causes the downfall of all the others), and having multiple settlements doesn’t help with those.
Also, to reach indefinite survival, we need to reduce per-century existential risk by some constant fraction each century (quite possibly requiring a deliberate and permanent prioritisation of this by humanity).
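(Same logic as before: if per-century risk is cut by a constant fraction $f$ each century, so $r_t = r_0(1-f)^t$, then $\sum_t r_t = r_0/f$ is finite and the chance of getting through every century, $\prod_t (1-r_t)$, again stays above zero.)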
You pointed out a fourth related issue. When it comes to the correlated risks, settling more and more star systems doesn’t just fail to help with them; it creates more and more opportunities for them to occur. If there were 100 billion settled systems, then at least for the risks that can’t be defended against (such as vacuum collapse), a galactic-scale civilisation would undergo 100 billion centuries’ worth of this risk per century. So as well as an existential-risk-reducing effect of space settlement, there is also a systematic risk-increasing effect. (And this is a more robust and analysable argument than those about space wars.)
I’ve long felt that humanity would want to bind all its settlements to a common constitution, ruling out certain things such as hostile actions towards each other, preparing advanced weaponry that could be used for this purpose, or attempts to seize new territory in inappropriate ways. That might help sufficiently with some of the coordination problems. But I hadn’t noticed that even if each location is coordinated and aligned, settling 100 billion worlds multiplies a certain part of the accident risk more than a billion-fold, and this creates a fundamental tension between the benefits of settling more places and the risks of doing so. I feel like getting per-century risk down to 1% within a few centuries might not be that hard, but if we need to get it down to 0.00000000001%, it is less clear that this is possible (though at least it is only the *objective probability* that needs to get so low; it’s OK if your confidence that you are right isn’t as strong as 99.999999999%).
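(To spell out the arithmetic, under the simplifying assumption that each settled system contributes an independent chance $q$ per century of triggering one of the indefensible correlated risks: $N$ systems then face roughly $1-(1-q)^N \approx Nq$ of that risk per century, so holding the galaxy-wide total to about 1% per century with $N = 10^{11}$ systems requires $q \approx 10^{-13}$, i.e. the 0.00000000001% figure.)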
One style of answer is to require that almost no settlements are capable of triggering these galaxy-wide existential risks. e.g.
they have no people, but contribute to our goals in some other way (such as pure happiness or robotic energy-harvesting for some later project)
or they have people flourishing in relatively low-tech utopian states
or they have people flourishing inside virtual worlds maintained by machines, where the people have no way of affecting the outside world
or we find all the possible correlated risks in advance and build defense-in-depth guardrails around all of them.
Alternatively, if each settlement is capable of imposing such risks, then you could think of each star-system-century as playing the same role as a century in my earlier model. That is, instead of needing to exponentially decrease per-period risk over time, we would need to exponentially decrease it per additional star system as well. But this is a huge challenge if we are thinking of adding billions of places within a small number of centuries. Or one could think of it as requiring that we divide the acceptable level of per-period risk by the number of settlements. In the worst case, one might not be able to gain any EV by settling other star systems relative to just staying on one, as the risk downsides outweigh the benefits. (But the kinds of limited settlements listed above should still be possible.)
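(In the same stylised terms as before: indefinite survival now requires the risk summed over all star-system-centuries to stay finite, i.e. $\sum_t N(t)\,\bar{q}_t < \infty$, where $N(t)$ is the number of settled systems and $\bar{q}_t$ the average per-system-century risk in century $t$; so during any period in which $N(t)$ grows exponentially, $\bar{q}_t$ has to fall at least correspondingly fast.)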
There is an interesting question about whether raw population has the same effect as extra settled star systems. I’m inclined to think it doesn’t, via a model whereby a well-governed star system isn’t just as weak as its most irresponsible or unlucky citizen, even if the galaxy is as weak as its most irresponsible or unlucky star system. e.g. doubling the population of a star system doesn’t double the chance it triggers a vacuum collapse, but doubling the number of independently governed star systems might (as a single system might decide not to prioritise risk avoidance).
Here is a nice simple model of the trade-off between redundancy and correlated risk. Assume that in each time period, each planet has an independent and constant chance of destroying civilisation on its own planet and an independent and constant chance of destroying civilisation on all planets. Furthermore, assume that unless all planets fail in the same time period, the failed ones can be restored from those that survive.
e.g. assume the planetary destruction rate is 10% per century and the galaxy destruction rate is 1 in 1 million per planet per century. Then with one planet the existential risk for human civilisation is ~10% per century. With two planets it is about 1% per century. It reaches a minimum at about 6 planets, where there is only a 1 in a million chance of losing all planets simultaneously from planetary risk, but now a ~6 in a million chance of one of them destroying everything. Beyond 6 planets, the total risk starts rising, as the redundancy each extra planet adds is smaller than the new risk it creates, and by the time you have 1 million planets, the existential risk rate for human civilisation is about 63% per century.
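To make these figures easy to check or vary, here is a minimal Python sketch of the toy model (my own illustrative code; the function and parameter names are mine, not anything from the post):

```python
# Toy model: each century, each of n planets independently destroys its own
# civilisation with probability p_local, and triggers a galaxy-wide catastrophe
# with probability p_global. Civilisation is lost in a century iff some planet
# triggers the galaxy-wide risk, or all planets fail locally in that century.

def per_century_risk(n, p_local=0.10, p_global=1e-6):
    """Chance that civilisation is lost in a given century with n settled planets."""
    no_galaxy_trigger = (1 - p_global) ** n   # no planet sets off the correlated risk
    not_all_fail_locally = 1 - p_local ** n   # at least one planet survives its local risk
    return 1 - no_galaxy_trigger * not_all_fail_locally

if __name__ == "__main__":
    for n in (1, 2, 6, 7, 100, 10**6):
        print(f"{n:>9,} planets: {per_century_risk(n):.2e} risk per century")
```

With the parameters above it reproduces the quoted figures: roughly 10% for one planet, about 1% for two, a minimum of about 7 in a million at six planets, and about 63% at a million planets.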
This is an overly simple model and I’ve used arbitrary parameters, but it shows that it is quite easy for risk to first fall and then rise as more planets are settled, with a risk-optimal level of settlement in between.
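(For what it’s worth, under the small-risk approximation $\mathrm{risk}(n) \approx n\,p_g + p_l^{\,n}$, with $p_l$ the planetary rate and $p_g$ the per-planet galaxy-destruction rate, the risk-optimal number of planets is $n^* \approx \ln\!\big(\ln(1/p_l)/p_g\big)/\ln(1/p_l)$, which comes to about 6.4 for the parameters above, matching the minimum at about 6 planets.)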