Very interesting, and props in particular for assembling the cosmic threats dataset—that does seem like a lot of work!
I tend to agree with you and Joseph that there isn’t anything on the object level to be done about these things yet, beyond just trying to ensure we get a long reflection before interstellar colonisation, as you suggest.
On hot take 2, this relies on the risks from each star system being roughly independent, so breaking this assumption seems like a good solution. But then each star system being very correlated maybe seems bad for liberalism and diversity of forms of flourishing and so forth. Maybe some amount of regularity and conformity is the price we need to pay for galactic security.
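To spell out the independence arithmetic (a minimal sketch with purely illustrative numbers, and only my guess at the model behind the take): if each system faces an independent per-century risk and systems cannot help or recolonise one another, every individual system is almost surely lost eventually, and adding more systems delays rather than prevents total loss.

```python
# Minimal sketch, assuming a toy model: n isolated star systems, each
# independently destroyed with probability p per century, no recolonisation.
# Numbers are purely illustrative.

def p_any_survive(p: float, n: int, t: int) -> float:
    """Probability that at least one of n independent systems survives t centuries."""
    p_one = (1 - p) ** t            # one system lasts all t centuries
    return 1 - (1 - p_one) ** n     # at least one of the n makes it

for t in (100, 1_000, 10_000):
    print(t, p_any_survive(p=0.01, n=1_000, t=t))  # ~1.0, then ~0.04, then ~1e-41
```

Under this toy model more systems buy time but not indefinite survival, which is why coupling the systems’ fates (coordinated defence, mutual recolonisation) looks attractive despite the conformity cost.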
Acausal trade/cooperation may end up being crucial here too once civilisation is spread across distances where it is hard or impossible to interact normally.
On hot take 2, this relies on the risks from each star system being roughly independent, so breaking this assumption seems like a good solution. But then each star system being very correlated maybe seems bad for liberalism and diversity of forms of flourishing and so forth. Maybe some amount of regularity and conformity is the price we need to pay for galactic security.
I think liberalism is unfortunately on a timer that will almost certainly expire pretty soon, no matter what we do.
Either we technologically regress as the human population falls and more anti-democratic civilizations win outright through the zero/negative-sum games being played, or we create AIs that replace us; and given the incentives plus the sheer difference in power, those AIs by default create something closer to a dictatorship for humans. That’s why value alignment is absolutely critical in the long run for AIs that can take every human job.
Modern civilization is not stable at all.
Acausal trade/cooperation may end up being crucial here too once civilisation is spread across distances where it is hard or impossible to interact normally.
Yeah, assuming no FTL, acausal trade/cooperation is necessary if you want anything like a unified galactic/universal polity.
Thanks Oscar :)
Yeah, there are so many horrible trade-offs to figure out around long-term resilience and liberty/diversity. I’m hopeful that these are solvable with a long reflection (and superintelligence!).