I appreciate this thoughtful comment with such clearly laid out cruxes.
I think, based on this comment, that I am much more concerned than you are about the possibility that created minds will suffer, because my prior is much more heavily weighted toward suffering when making a draw from mindspace. I hope to cover the details of my prior distribution in a future post (though doing that topic justice will require a lot of time I may not have).
Additionally, I am a “Great Asymmetry” person: I don’t think it is wrong to refrain from creating life that may thrive, even though it is wrong to create life that will suffer. (I don’t think the Great Asymmetry position fits the most elegantly with other utilitarian views that I hold, like valuing positive states; I just think it is true.) Even if I were trying to be a classical utilitarian about this, I still think the risk of creating suffering that we don’t know about, and perhaps in principle could never know about, is huge and should dominate our calculus.
I agree that our next moves on AI will likely set the tone for future risk tolerance. I just think the unfortunate truth is that we don’t know what we would need to know to proceed responsibly with creating new minds and setting precedents for creating new minds. I hope that one day we know everything we need to know and can fill the Lightcone with happy beings, and I regret that the right move now to prevent suffering today could potentially make it harder to proliferate happy life one day, but I don’t see a responsible way to set pro-creation values today that adequately takes welfare into account.