Paragraphs 2 and 3 make total sense to me. (Well, actually I guess that's because there are perhaps much more efficient ways of creating meaningful sentient lives than making human copies, which could result in much more value.)
I'm not sure I understand you correctly in the last paragraph. Are you claiming that worlds in which AI is aligned with only some parts of our current understanding of ethics won't realize a meaningful amount of value, and should therefore be disregarded in our calculations, since we are betting on improving the chance of alignment with what we would want our ethics to eventually become?