Well said. I also think it’s important to define what is meant by “catastrophe.” Just as an example, I personally would consider it catastrophic to see a future in which humanity is sidelined and subjugated by an AGI (even a “friendly,” aligned one), but many here would likely disagree with me that this would be a catastrophe. I’ve even heard otherwise rational (non-EA) people claim a future in which humans are ‘pampered pets’ of an aligned ASI to be ‘utopian,’ which just goes to show the level of disagreement.
To me, what matters is whether the AGIs are benevolent and have qualia/consciousness. If AGIs are just ordinary computers, only smarter, I might agree with you; but if they are conscious and benevolent, I’m okay with being a pet.
I’m not sure we could ever truly know whether an AGI was conscious or experienced qualia (which are by definition not quantifiable). And you’re probably right that being a pet of a benevolent ASI wouldn’t be a miserable existence (but it is still an x-risk … because it permanently ends humanity’s status as the dominant species).
I would caution against assuming the Hard Problem of Consciousness is unsolvable “by definition” (if it is ever solved, qualia will likely become quantifiable). I think the reasonable stance is to presume it is solvable. But until it is solved, we must not allow an AGI takeover; and even if AGIs stay under human control, that could lead to a previously unimaginable power imbalance between a few humans and the rest of us.