I don’t think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.
If AI systems are aligned with us, but turned out not to be conscious or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we’d regard as good (even though it likely contains very few minds that resemble either us or them).
If AI systems are conscious but not at all aligned with us, then why think that they would create conscious and flourishing successors?
So my view is that alignment is the main AI issue here (and reflecting well is the big non-AI issue), with questions about consciousness being in the giant bag of complex questions we should try to punt to tomorrow.
This argument presupposes that the resulting AI systems are either totally aligned with us (and our extrapolated moral values) or totally misaligned.
If there is much room for successful partial alignment (say, maximising some partial values we have), and we can do actual work to steer that toward something better, then it may well be the case that we should work on that. Specifically, if we imagine the AI systems maximising some hard-coded value (or something learned from a single database), then it seems easy to make a case for working on understanding what is morally valuable before working on alignment.
I’m sure that there are existing discussions on this question which I’m not familiar with. I’d be interested in relevant references.
My main point was that, in any case, what matters is the degree of alignment of the AI systems, not their consciousness. But I agree with what you are saying.
If our plan for building AI depends on having clarity about our values, then it’s important to achieve such clarity before we build AI—whether that’s clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.
I agree consciousness is a big ? in our axiology, though it’s not clear if the value you’d lose from saying “only create creatures physiologically identical to humans” is large compared to all the other value we are losing from the other kinds of uncertainty.
I tend to think that in such worlds we are in very deep trouble anyway and won’t realize a meaningful amount of value regardless of how well we understand consciousness. So while I may care about them a bit from the perspective of parochial values (like “is Paul happy?”) I don’t care about them much from the perspective of impartial moral concerns (which is the main perspective where I care about clarifying concepts like consciousness).
Paragraphs 2 and 3 make total sense to me. (Well, actually I guess that's because there are perhaps much more efficient ways of creating meaningful sentient lives than making human copies, which could result in much more value.)
Not sure that I understand you correctly in the last paragraph. Are you claiming that worlds in which AI is only aligned with some parts of our current understanding of ethics won't realize a meaningful amount of value? And that they should therefore be disregarded in our calculations, as we are betting on improving the chance of alignment with what we would want our ethics to eventually become?