Thanks for this comment! I broadly agree with it all and it was very interesting to read. Thanks in particular for advancing my initial takes on governance (I’m far more comfortable discussing quantum physics than governance systems).
a) Preventing catastrophe seems much more important for advanced civilizations than I realized, and it's not enough for the universe to be defense-dominated.
b) Robustly good governance seems attainable? It may be possible to functionally ‘lock-out’ catastrophic-risk and tyranny-risk on approach to tech maturity and it seems conceivable (albeit challenging) to softly lock-in definitions of ‘catastrophe’ and ‘tyranny’ which can then be amended in future as cultures evolve and circumstances change.
Agreed on both. Locking stuff out seems possible, and then as knowledge advances (in terms of moral philosophy or fundamental physics) and new possibilities come to light and priorities change, the “governance system” could be updated from a centralised position, like a software update expanding at the speed of light. Then the main tradeoff is between ensuring no possibility of a galactic x-risk or s-risk you know about could ever happen, and being adaptable to changing knowledge and emerging risks.
At the scale of advanced civilizations, collapse/catastrophe for even a single star system seems unbearable.
I strongly agree with this. We (as in, humanity) are at a point where we can control what the long-term future in space will look like. We should not tolerate a mostly great future in which some star systems fall into collapse or suffering. We are responsible for preventing that, and allowing it to happen at all is inconceivably terrible even if the EV calculation is positive. We're better than naive utilitarianism.
If we buy your argument here, Jordan, or my takeaways from Joe's talk, then we're like, ah man, we may need really strong space governance. Like excellent, robust space governance. But no, no! This is a tyranny risk.
There are ways to address the risks I outlined without a centralised government that might be prone to tyranny (echoing your “hand waving” section later):
Digital World creation – a super-capable machine with blueprints (not an AGI superintelligence) goes to each star system and creates digital sentient beings. That's it. No need for governance of independent civs.
We only send out probes to collect resources from the galaxy and bring them back to our solar system. We can expand in the digital realm here and remain coordinated.
Right from the beginning we figure out a governance system with 100% existential security and 0% s-risk (whatever that is). The expansion into space is supervised to ensure that each independent star system begins with this super governance system, but other than that each has liberty.
Just implement excellent observation of inhabited star systems. Alert systems that flag bad behaviour to nearby star systems prevent s-risks from lasting millennia (but, of course, carry conflict risks).
Maybe if we find the ultimate moral good and are coordinated enough to spread it, then the universe will be homogeneous, so there is no need for governance to address unpredicted behaviour.
In particular it seems possible to forcibly couple the power to govern with goodness.
I think this is a crucial point, and I'm hopeful about it. If it's possible to lock in that strong correlation, then does that ensure absolute existential security and no s-risks? I think it depends on the goodness. If the "goodness" is based on panbiotic ethics, then we get a universe full of suffering Darwinian biology. If the "goodness" is utilitarian, then the universe becomes full of happiness machines… maybe that's bad. Don't know. It seems that the goodness in your USA example is defined by Christian values, which maybe don't give us the best possible long-term future. Maybe I'm reading too far into your simple model (I find it conceptually very helpful though).
There’s also the sort of hand-off or die roll wherein you cede/lose power to something and can’t get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness.
Yeah, I agree. But I think it depends on the way that society evolves. If we're able to have a long reflection (which I think is unlikely), then maybe we can build a good God more confidently. But your model sounds more realistic.