I think 2) is not generally accepted as a given. Rather, AGI should not be assumed to experience welfare. It might, but it isn't obviously sentient, and sentience seems like a necessary condition for experiencing welfare. A thermostat has goal-directed behaviour; some might argue that even a thermostat is sentient, but that's a controversial position.
It doesn't seem obvious to me that abstract reasoning necessarily requires subjective experience. Experience might just as well be a product of animals evolving as embodied agents in the world. The thin layer of abstract thought on the outer parts of our brains doesn't seem to me to be the thing generating our qualia. Subjective experience seems more primal to my intuition.
If we create digital minds that can experience welfare, they matter as much as we do in the moral calculus. Fleshing out the implications of that would require a fully general understanding of what we mean by a mind. Considering how uncertain we are about insect minds, this seems to require a lot of progress. It would be preferable if the creation of digital minds could be avoided until that progress has been made. In a world with digital minds, alignment might mean creating a superintelligence that's compatible with the flourishing of minds in a more general sense.