Your interpretation isn’t exactly wrong; I’m proposing an ontological shift in the understanding of what’s more central to the self, the thing to care about (i.e. is the moral patient fundamentally a qualia that has or can have an agent, or an agent that has or can have qualia?).
The intuition is that if qualia, on its own, is generic and completely interchangeable among moral patients, it might not be what makes them moral patients, even if it’s an important value. A blindmind upload ultimately has far more in common with the sentient person they are based on than that person has with a phenomenal experience stripped of all the content that makes up their agency.
Thus it would be the agent that primarily values the qualia (and everything else), rather than the reverse. This decenters qualia even if it is exceptionally valuable: qualia would be valuable not a priori (which would make the agent merely instrumentally valuable, as a means of ensuring the qualia’s existence) but because it was chosen, and intrinsic value would belong to that which can make such choices.
A blindmind that doesn’t want qualia would then be valuable in this capacity to value things about the world in general, of which qualia is just one particular type (even if a very valuable one for sentient agents).
The appropriate comparison, rather than a Paperclip Maximizer (which, I argue in Part 2, represents a type of agent whose values are inherently an aggression against the possibility of universal cooperation), would be aliens with strange and hard-to-comprehend values that are nonetheless no more intrinsically tied to the destruction of everything else than human values are. If their moral patiency consists only in their qualia, then the best thing we could do for them is simply give them positive feelings, routing around whatever they valued in particular as mere means to that end (and thus treating their values as ultimately not really being about changing the outer world).
Respecting their agency would mean at least trying to understand what they are trying to do, from their perspective. Not necessarily giving them everything they want (that’s subject to many considerations and to their particular values), but respecting their goals in the sense that, when a human wants to make some great art, we take helping them to mean helping them with that, rather than putting them in an experience machine where they merely think they did it.