My thought experiment was aimed at showing that direct intuitive responses to such thought experiments are irrationally sensitive to framing and to how concrete the explanations are.
The asymbolic child is almost identical to a typical child and acts the same way, so you would think people would be more hesitant to dismiss their apparent pain than a robot’s. But I would guess people actually dismiss the asymbolic child’s pain more easily.
My explanation for why the asymbolic child’s pain doesn’t matter (much) actually shouldn’t make you more confident of that claim than the explanation given in the robot case. I’ve explained how and why the child is asymbolic, but in the robot case, we’ve just said “our best science reveals to us—correctly—that they are not sentient”, where “correctly” stipulates 100% certainty that they aren’t sentient. Making an explanation more concrete makes it more believable, easier to entertain, and easier for intuitions to respond to appropriately. But it doesn’t make it more probable!
However, on reflection, the following considerations probably push the other way and undermine my claim of irrational intuitive responses:
My opportunity cost framing (e.g. thinking it’s better to give the painkillers to the typical child) doesn’t mean you would normally want to perform surgery on the asymbolic child without painkillers, especially if painkillers are cheap and not very supply-limited, and the asymbolic child would protest less (pretend to be in pain less) if given them.
People aren’t sure moral patienthood requires sentience, a still-vague concept that may evolve into something they don’t take to be necessary. But they’re pretty sure that the pain responses in the asymbolic child don’t indicate something that matters much, whatever the correct account of moral patienthood and value turns out to be. It can be easier to identify and be confident in specific negative cases than to trust a general rule separating negative from positive cases.