My main point was that in any case what matters is the degree of alignment of the AI systems, not their consciousness. But I agree with what you are saying.
If our plan for building AI depends on having clarity about our values, then it’s important to achieve such clarity before we build AI—whether that’s clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.
I agree consciousness is a big question mark in our axiology, though it’s not clear whether the value you’d lose from saying “only create creatures physiologically identical to humans” is large compared to all the other value we’d lose from the other kinds of uncertainty.
I tend to think that in such worlds we are in very deep trouble anyway and won’t realize a meaningful amount of value regardless of how well we understand consciousness. So while I may care about them a bit from the perspective of parochial values (like “is Paul happy?”) I don’t care about them much from the perspective of impartial moral concerns (which is the main perspective where I care about clarifying concepts like consciousness).
Paragraphs 2 and 3 make total sense to me. (Well, except that there are perhaps much more efficient ways of creating meaningful sentient lives than making human copies, which could result in much more value.)
Not sure that I understand you correctly in the last paragraph. Are you claiming that worlds in which AI is aligned with only some parts of our current understanding of ethics won’t realize a meaningful amount of value, and that they should therefore be disregarded in our calculations, since we are betting on improving the chance of alignment with what we would want our ethics to eventually become?