Elon Musk recently presented SpaceX’s roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regards to space colonisation and s-risks:
1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one)
2. Is it possible to create a governance structure that would prevent anyone in a whole galactic civilisation from creating digital sentience capable of suffering? (sounds really hard, especially given the huge distances and potential time delays in messaging… no idea)
3. What is the point of no return, the domino that, once knocked over, inevitably leads to self-perpetuating human expansion and the creation of a galactic civilisation? (somewhere around a self-sustaining civilisation on Mars, I think)

If the answer to question 3 is "Mars colony", then it's possible that creating a colony on Mars is a huge s-risk if we don't first answer question 2.
Would appreciate some thoughts.
This quick take was influenced by Stuart Armstrong and Anders Sandberg's article on rapidly expanding throughout the galaxy and Charlie Stross's blog post about griefers.
Interesting ideas! I’ve read your post Interstellar travel will probably doom the long-term future with enthusiasm and have had similar concerns for some years now. Regarding your questions, here are my thoughts:
Probability of s-risk

I agree that in a sufficiently large space civilization (one that isn't controlled by your Governance Structure), the probability of s-risk is almost 100% (and not just from digital minds). Let's unpack this:

Our galaxy has roughly 200 billion stars (2*10^11), which means at least 10^10 viable settleable star systems. A Dyson swarm around a sun-like star could conservatively support 10^20 biological humans (today we are 10^10, and that number is extrapolated from how much sunlight is needed to sustain one human with conventional farming).

80k defines an s-risk as "something causing vastly more suffering than has existed on Earth so far". This could easily be "achieved" even without digital minds if just one colony out of the 10^10 decides it wants to create lots of wildlife preserves and its Dyson swarm consists mostly of those. With around 10^10 times the living area of Earth, and as many times more wild animals, a single year going by around that star would produce cumulative suffering exceeding the total from all of Earth's history (only ~1 billion (10^9) years of animal life).

This would not necessarily mean that the whole galactic civ was morally net bad. A galaxy with 10,000 hellish star systems, 10 million heavenly systems and 10 billion rather normal but good systems would still be a pretty awesome future from a total utility standpoint.

My point is that an s-risk defined in terms of Earth's suffering becomes an increasingly low bar to cross the larger your civilization is. At some point you would need insanely good "quality control" in every corner of your civilization. That would be analogous to ensuring that every single one of the 10^10 humans on Earth today is happy and never gets hurt even once, which seems like a bit too high a standard for how well the future should go.
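The arithmetic behind these orders of magnitude can be checked with a short back-of-the-envelope script. The inputs are the rough figures above, not precise astronomy:

```python
# Back-of-the-envelope check of the orders of magnitude above.
humans_today = 1e10                  # ~10^10 people on Earth
dyson_capacity = 1e20                # humans supportable by one Dyson swarm
earth_animal_history_years = 1e9     # ~1 billion years of animal life on Earth

# One wildlife-preserve Dyson swarm: ~10^10 times Earth's habitable area,
# hence roughly 10^10 times Earth's wild-animal population.
area_multiplier = dyson_capacity / humans_today   # 1e10

# Years of swarm operation needed to match the cumulative suffering of all of
# Earth's history, assuming suffering scales with animal-population-years:
years_to_match = earth_animal_history_years / area_multiplier
print(years_to_match)   # prints 0.1, i.e. about five weeks
```

So under these (very rough) linear assumptions, a single preserve-swarm crosses the "more suffering than all of Earth's history" bar in about a tenth of a year, which is why one year comfortably exceeds it.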
But that nitpick aside, I currently expect that a space future without some kind of governance system like the one you're describing still has a high chance of ending up net bad.
How to create the Governance Structure (GS)

Here is my idea of how this could look: A superintelligence (it could also be post-human) creates countless identical but independent GS copies of itself that expand through the universe and accompany every settlement mission. Their detailed value system is made virtually unalterable, built to last for trillions of years. This, I think, is technically achievable: strong copy-error and damage protection, no value updates in response to new evidence, and strong defences against outside manipulation attacks.

The GS copies largely act on their own in their respective star-system colonies but have protocols in place for coordinating in a loose manner across star systems and millions of years. I think this could work a bit like an ant colony: lots of small, selfless agents interacting locally with one another, all with exactly the same values and probably secure intra-hive communication methods. They could still mount an impressively coordinated galactic response to, say, a von Neumann probe invasion.

I could expand further on this idea if you'd like.
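As a toy illustration of why the "strong copy-error protection" part is plausible: if each GS node keeps its value system in several redundant copies and takes a majority vote before every self-replication, value drift becomes astronomically unlikely. The per-copy corruption rate and redundancy factor below are invented for illustration, not estimates:

```python
from math import comb, expm1, log1p

def corruption_risk(p, n, cycles):
    """Probability that a majority of n redundant value copies corrupt
    at least once over `cycles` replications, given per-copy corruption
    probability p per cycle (illustrative toy model, not a design)."""
    # chance that a majority (>= n//2 + 1) of the n copies corrupt in one cycle
    per_cycle = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(n // 2 + 1, n + 1))
    # accumulate over many cycles without losing precision for tiny risks
    return -expm1(cycles * log1p(-per_cycle))

print(corruption_risk(1e-6, 1, 10**6))   # single copy: ~0.63, drift likely
print(corruption_risk(1e-6, 7, 10**6))   # 7-way majority: ~3.5e-17
```

The point of the sketch is just that redundancy suppresses drift super-exponentially: going from one copy to a seven-way majority vote turns a near-certain failure over a million replications into a vanishingly small one.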
Point of no-return

I'm unsure about this. Possible such points:

- a space race gets going in earnest (with geopolitical realities making a Long Reflection infeasible)
- the first ASI is created and does not have the goal of preventing s- and x-risks
- the first (self-sustaining) space colony gains political independence
- the first interstellar mission (to create a colony) leaves the solar system
- a sub-par, real-world implementation of the Governance Structure breaks down somewhere in human-settled space
My current view is still that the two most impactful things at the moment are 1) ensuring that any ASI that gets developed is safe and benevolent, and 2) improving how global and space politics is conducted.

Any specific "points of no return" seem very contingent on the exact circumstances at that point. Nevertheless, thinking ahead about which situations might be especially dangerous or crucial seems like a worthwhile pursuit to me.
Hi Birk. Thank you for your very in-depth response; I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help).
The “points of no return” do seem quite contingent, and I’m always sceptical about the tractability of trying to prevent something from happening—usually my approach is: it’s probably gonna happen, how do we prepare? But besides that, I’m going to look into more specific “points of no return” as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.
"probably near 100% if digital sentience is possible… it only takes one"

Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.
Yeah sure, it's like the argument that if you put infinite chimpanzees in front of typewriters, one of them will write Shakespeare. A galactic civilisation would be very dispersed, and most likely each 'colony' occupying a solar system would govern itself independently. So they could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100%.

I used digital sentience as an example because it's the risk of astronomical suffering that I find most terrifying: IF digital sentience is possible, then the number of suffering beings it would be possible to create could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part.
But this also applies to tyrannous governments: how many of those independent civilisations across a galaxy will become tyrannous and cause great suffering to their inhabitants? How many of those civilisations will terraform other planets and start biospheres of suffering beings?
The same logic also applies to x-risks that affect a galactic civilisation:

"all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare)." (Charlie Stross)
Stopping these things from happening seems really hard. It’s like a galactic civilisation needs to be designed right from the beginning to make sure that no future colony does this.
Thanks. In the original quick take, you wrote “thousands of independent and technologically advanced colonies”, but here you write “hundreds of millions”.
If you think there’s a 1 in 10,000 or 1 in a million chance of any independent and technologically advanced colony creating astronomical suffering, it matters if there are thousands or millions of colonies. Maybe you think it’s more like 1 in 100, and then thousands (or more) would make it extremely likely.
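Treating colonies as independent actors, the "it only takes one" arithmetic is just 1 − (1 − p)^n, and a few lines of code show how sensitive the total risk is to both numbers. The per-colony probabilities below are purely illustrative:

```python
def p_at_least_one(p_per_colony, n_colonies):
    """P(at least one of n independent colonies creates astronomical suffering)."""
    return 1 - (1 - p_per_colony) ** n_colonies

# Sweep illustrative per-colony risks against thread-relevant colony counts:
for p in (1e-2, 1e-4, 1e-6):
    for n in (1_000, 1_000_000, 100_000_000):
        print(f"p={p:g}, n={n:>11,}: {p_at_least_one(p, n):.4f}")
```

At p = 1 in 100, even a thousand colonies make the outcome essentially certain; at p = 1 in a million, a thousand colonies keep total risk around 0.1%, but a hundred million colonies push it back to near-certainty. So whether "thousands" or "hundreds of millions" is the right count matters exactly as much as the per-colony probability.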
Yeah that’s true.
I think 1000 is where I would start to get very worried intuitively, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could have that many if it didn’t kill itself before then.
I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.