It therefore seems better to prioritize our descendant moral patients conditional on our survival, because there are far, far more of them.
I think in practical terms this isn’t mutually exclusive with ensuring our survival. The immediate way to secure our survival, at least for the next decade or so, is a global moratorium on ASI. This also reduces s-risks from ASI and keeps our options open for reducing human-caused s-risk (e.g. we can still avoid factory farming in space colonization).
That seems true, but I’m not convinced it’s the best way to reduce s-risks on the margin. See, for example, Vinding, 2024.
I’d also want to see a fuller analysis of ways it could backfire. For example, a pause might make multipolar scenarios more likely by giving more groups time to build AGI, which could increase the risks of conflict-based s-risks.
“a pause might make multipolar scenarios more likely by giving more groups time to build AGI”
That wouldn’t really be a pause! A proper Pause (or moratorium) would include a global taboo on AGI research to the point where as few people would be doing it as are working on eugenics now (and they would be relatively easy to stop).
A pause would still give more groups more time to catch up on existing research and to build infrastructure for AGI (energy, datacenters), right? Then, when the pause is lifted, we could have more players at the research frontier, ready to train frontier models.
Any realistic Pause would not be lifted absent a global consensus on proceeding with whatever risk remains.
Vinding says:
There is a key point on which I agree strongly with advocates for an AI pause: there is a massive moral urgency in ensuring that we do not end up with horrific AI-controlled outcomes. Too few people appreciate this insight, and even fewer seem to be deeply moved by it.
At the same time, I think there is a similarly massive urgency in ensuring that we do not end up with horrific human-controlled outcomes. And humanity’s current trajectory is unfortunately not all that reassuring with respect to either of these broad classes of risks …
The upshot for me is that there is a roughly equal moral urgency in avoiding each of these categories of worst-case risks …
But he does not justify this equality. It seems highly likely to me that ASI-induced s-risks are on a much larger scale than human-induced ones (owing to ASI being much more powerful than humanity), creating a (massive) asymmetry in favour of preventing ASI.