My thought experiment was aimed at showing that direct intuitive responses to such thought experiments are irrationally sensitive to framing and how concrete the explanations are.
The asymbolic child is almost identical to a typical child and acts the same way, so you would think people would be more hesitant to dismiss their apparent pain than a robot's. But I would guess people dismiss the asymbolic child's pain more easily.
My explanation for why the asymbolic child's pain doesn't matter (much) actually shouldn't make you more sure of the fact than the explanation given in the robot case. I've explained how and why the child is asymbolic, but in the robot case, we've just said "our best science reveals to us, correctly, that they are not sentient". "Correctly" means 100% certainty that they aren't sentient. Making the explanation more concrete makes it more believable, easier to entertain, and easier for intuitions to reflect appropriately. But it doesn't make it more probable!
However, on reflection, the following considerations probably push the other way and undermine my claim of irrational intuitive responses:
My opportunity cost framing (e.g. thinking it's better to give the painkillers to the typical child) doesn't mean you would normally want to perform surgery on the asymbolic child without painkillers, if painkillers are cheap and not very supply-limited and the asymbolic child would protest less (pretend to be in pain less) when given them.
People aren't sure moral patienthood requires sentience, a still vague concept that may evolve into something they don't take to be necessary, but they're pretty sure that the pain responses in the asymbolic child don't indicate something that matters much, whatever the correct account of moral patienthood and value. It can be easier to identify and be confident in specific negative cases than to put trust in a rule separating negative from positive cases.