One argument for the long reflection that I think has been missed in a lot of this discussion is that it’s a proposal for taking Nick’s Astronomical Waste argument (AWA) seriously. Nick argues that it’s worth spending millennia to reduce existential risk by a couple of percent. But launching, for example, a superintelligence with the values of humanity in 2025 could itself constitute an existential risk, in light of future human values. So AWA implies that a sufficiently wise and capable society would be prepared to wait millennia before jumping into such an action.
Now we may practically never be capable enough to coordinate to do so, but the theory makes sense.