There was a lot to this that was worth responding to. Great work.
I think making God would actually be a bad way to handle this. I think you could probably stop this with superior forms of limited-knowledge surveillance, and there are likely socio-technical remedies that dampen some of the harsher liberty-related tradeoffs here considerably.
Imagine, for example, a more distributed machine intelligence system. Perhaps it's really not all that invasive to monitor that you're not making a false vacuum or whatever. And it uses futuristic auto-secure hyper-delete technology to instantly delete everything it sees that isn't relevant.
Also, the system itself isn't all that powerful, but rather can alert others / draw attention to important things. And the system's implementation, as well as the actual violent / forceful enforcement that goes along with it, probably can and should be implemented in a generally more cool, chill, and fair way than I associate with Christian-God-style centralized surveillance and control systems.
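To make that shape concrete, here's a toy sketch (in Python, with every name and the "is this dangerous?" check made up) of what I mean: the monitor watches for exactly one narrow thing, its only output is an alert to others, and it retains nothing either way.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observation:
    source: str    # e.g. which facility or lab the reading came from
    payload: dict  # raw sensor / activity data

class NarrowMonitor:
    """Watches for exactly one class of catastrophic precursor and nothing else.

    It has no enforcement powers of its own: its only output is an alert to
    other parties. Observations are never logged, so no general-purpose
    surveillance record accumulates.
    """

    def __init__(self,
                 is_superweapon_precursor: Callable[[Observation], bool],
                 send_alert: Callable[[str], None]):
        self.is_superweapon_precursor = is_superweapon_precursor
        self.send_alert = send_alert  # channel to neighbors / peers, not a weapon

    def ingest(self, obs: Observation) -> None:
        if self.is_superweapon_precursor(obs):
            # Alert only: draw others' attention; the monitor itself enforces nothing.
            self.send_alert(f"possible superweapon precursor at {obs.source}")
        # "Auto-delete": nothing is stored in either branch; the observation simply
        # goes out of scope here, which is the data-minimization point.
```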
Also, a lot of these problems are already extremely salient for the "how to stop civilization-ending superweapons from being created"-style problems we are already in the midst of here on Earth in 2025. It seems basically true that you do ~need to maintain some level of coordination with / dominance over anything that could/might make a superweapon that could kill you, if you want to stay alive indefinitely.
Thanks Jacob. I really like this idea for getting around the problem of liberty. Though I'm not sure how rapid the response from others would have to be to someone initiating vacuum decay: could a "bad actor" initiate vacuum decay in the time it takes for the system to send an alert and for a response to arrive? I think a non-intrusive surveillance system would work in a world where near-instant communication between star systems is possible (e.g. wormholes or quantum coupling).
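(To put rough numbers on that worry: without some faster-than-light channel, the floor on "alert goes out, response arrives" is the round-trip light delay, which is already years even for the very nearest systems. Toy arithmetic below; the distances are standard figures, the framing is just illustrative.)

```python
# Rough floor on "alert out, response back" latency at light speed.
# If enforcement isn't already stationed locally, the round trip alone takes years.
nearby_systems_ly = {
    "Proxima Centauri": 4.25,  # approximate distances in light-years
    "Barnard's Star": 5.96,
    "Sirius": 8.6,
}

for name, dist_ly in nearby_systems_ly.items():
    round_trip_years = 2 * dist_ly  # alert travels out, response travels back
    print(f"{name}: >= {round_trip_years:.1f} years before any response can arrive")
```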
Thanks, it's not that original. I am sure I have heard them talk about AIs negotiating and forgetting stuff on the 80,000 Hours Podcast, and David Brin has a book that touches on this a lot called "The Transparent Society". I haven't actually read it, but I heard a talk he gave.
Maybe technological surveillance and enforcement requirements will actually be really intense at technological maturity, and you will need to be really powerful, really local, and have a lot of context for what's going on. In that case, a value like privacy or "being alone" might be really hard to save.
Hopefully, even in that case, you could have other forms of restraint. Like, I can still imagine that if something like the orthogonality thesis is true, then you could maybe have a really, really elegant, light-touch, special-focus anti-superweapon system that is reliably limited to that goal. If we understood the cognitive elements well enough that it felt like physics or programming, then we could even say that the system meaningfully COULD NOT do certain things (violate the prime directive or whatever), and then it wouldn't feel as much like an omnipotent overlord as like a special-purpose tool deployed by local law enforcement (because this place would be bombed or invaded if it could not prove it had established such a system).
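As a gesture at what "meaningfully COULD NOT" might look like if it really were as legible as programming: a toy sketch where the only effector the system is ever handed is an alert channel, so "do something else" isn't in its action space at the interface level. (The hard, unsolved part is getting the same guarantee at the cognitive level rather than the API level; all names here are invented.)

```python
from typing import Protocol

class AlertChannel(Protocol):
    def raise_alert(self, msg: str) -> None: ...

class AntiSuperweaponTool:
    """A special-purpose watcher whose *only* effector is an alert channel.

    It is never handed actuators, credentials, or general network access, so
    "take over the world" is not an available action, any more than it is for a
    smoke detector. The open problem is making the cognitive version of this
    claim as solid as the interface version.
    """

    def __init__(self, alerts: AlertChannel):
        self._alerts = alerts  # the one and only capability granted

    def review(self, evidence_summary: str, looks_like_superweapon: bool) -> None:
        if looks_like_superweapon:
            self._alerts.raise_alert(evidence_summary)
        # There is no other branch: staying silent is the only alternative action.
```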
If you are a poor peasant-farmer world, then maybe nobody needs to know what your people are writing in their diaries. But if you are the head of fast prototyping and automated research at some relevant dual-use technology firm, then maybe there should be much more oversight. Idk, there feels like lots of room for gradation, nuance, and context awareness here, so I guess I agree with you that the "problem of liberty" is interesting.
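Something like an explicit, context-keyed oversight policy is the kind of gradation I'm imagining; a toy example below, with the tiers and contexts obviously invented for illustration.

```python
# Oversight intensity keyed to what an actor could plausibly build, not applied uniformly.
OVERSIGHT_POLICY = {
    "subsistence farming settlement": "none beyond baseline physics monitoring",
    "general manufacturing": "periodic declarations and spot checks",
    "automated dual-use research lab": "continuous narrow monitoring of relevant equipment",
}

def required_oversight(context: str) -> str:
    # Unknown contexts default to the cautious end rather than the permissive end.
    return OVERSIGHT_POLICY.get(context, "continuous narrow monitoring of relevant equipment")
```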