You seem to conflate moral patienthood with legal rights/taking entities into account? We can take into account the possibility that some seemingly non-sentient agents might be sentient. But we don’t need to definitively conclude they’re moral patients to do this.
In general, I found it hard to assess which arguments you’re making, and I would suggest stating it in analytic philosophy style: a set of premises connected by logic to a conclusion. I had Claude do a first pass: https://poe.com/s/lf0dxf0N64iNJVmTbHQk
I consider Claude’s summary accurate.
While I agree that giving consideration/rights is different from concluding something is a moral patient (respecting entities of unknown sentience by default, as a precaution), my point is also that excluding blindminds from intrinsic concern easily biases one toward treating their potential freedom as more catastrophic than the potential enslavement of sentients.
Discussions of AI rights often emphasize a call for precaution of the form: “because we don’t know how to measure sentience, we should be very careful about granting personhood to AI that acts like a person”.
Charitably, sometimes that could just mean not taking an AI’s self-assessment at face value. But often it’s less ambiguous: that legal personhood shouldn’t be granted to actually human-level AI, given the risk that it’s not sentient. This is what I call an inversion of the precautionary principle.
So I argue that even if one doesn’t see intrinsic value in blindminds, there is no reason to see them as net-negative monsters, and so an effective defense of the precautionary principle regarding sentient AI needs to be willing to at least say “and if non-sentient AI gets rights, that wouldn’t be a catastrophe”.
But, apart from that, I also argue why non-sentient intelligences could be moral patients intrinsically anyway.