I’m not sure we could ever truly know whether an AGI was conscious or experienced qualia (which are by definition not quantifiable). And you’re probably right that being a pet of a benevolent ASI wouldn’t be a miserable thing (but it is still an x-risk … because it permanently ends humanity’s status as the dominant species).
I would caution against assuming the Hard Problem of Consciousness is unsolvable “by definition” (if it is solved, qualia will likely become quantifiable). The reasonable stance is to presume it is solvable. But until it is solved we must not allow an AGI takeover, and even if AGIs stay under human control, they could create a previously unimaginable power imbalance between a few humans and the rest of us.