While I agree that giving consideration/rights is different from concluding something is a moral patient (respecting entities of unknown sentience by default, as a precaution), my point is also that excluding blindminds from intrinsic concern easily biases us toward treating their potential freedom as more catastrophic than the potential enslavement of sentients.
Discussions of AI rights often emphasize a call for precaution of the form “because we don’t know how to measure sentience, we should be very careful about granting personhood to AI that acts like a person”.
And, charitably, sometimes that could just mean not taking an AI’s self-assessment at face value. But often it’s less ambiguous: that legal personhood shouldn’t be granted even to actually human-level AI, given the risk that it isn’t sentient. This is what I call an inversion of the precautionary principle.
So I argue that, even if one doesn’t see intrinsic value in blindminds, there is no reason to see them as net-negative monsters, and so an effective defense of the precautionary principle regarding sentient AI needs to be willing to at least say “and if non-sentient AI gets rights, that wouldn’t be a catastrophe”.
But, apart from that, I also argue that non-sentient intelligences could be intrinsically moral patients anyway.
I consider Claude’s summary accurate.