To me, it matters whether the AGIs are benevolent and have qualia/consciousness. If AGIs are just ordinary computers that happen to be smart, I may agree; if they are conscious and benevolent, I’m okay with being a pet.
I’m not sure we could ever truly know whether an AGI was conscious or experienced qualia (which are, by definition, not quantifiable). And you’re probably right that being the pet of a benevolent ASI wouldn’t be a miserable existence (but it is still an x-risk … because it permanently ends humanity’s status as the dominant species).
I would caution against assuming the Hard Problem of Consciousness is unsolvable “by definition” (if it is solved, qualia will likely become quantifiable). The reasonable stance is to presume it is solvable. But until it is solved, we must not allow an AGI takeover, and even if AGIs stay under human control, they could lead to a previously unimaginable power imbalance between a few humans and the rest of us.