Excerpting from and expanding on a bit of point 1 of my reply to akash above. Here are four philosophical areas where I feel like total hedonic utilitarianism (as reflected in common animal-welfare calculations) might be missing the mark:
Something akin to “experience size” (very well-described by that recent blog post!)
The importance of sapience—if an experience of suffering is happening “all on its own”, floating adrift in the universe with nobody to think “I am suffering”, “I hope this will end soon”, etc., does this make the suffering experience worse-than, or not-as-bad-as, human suffering where the experience is tied together with a rich tapestry of other conscious experiences? Maybe it’s incoherent to ask questions like this, or I am thinking about this in totally the wrong way? But it seems like an important question to me. The similarities between layers of “neurons” in image-classifying AIs and the actual layouts of literal neurons in the human retina + visual cortex (both humans and AIs have a layer for initial inputs, then for edge-detection, then for corners and curves, then simple shapes and textures, then eventually for higher concepts and whole objects) make me think that image-classifiers might be having a genuine “experience of vision” (i.e. qualia), but an experience that is disconnected (of course) from any sense of self, sense of wellbeing-vs-suffering, or wider understanding of its situation. I think many animals might have experiences that are intermediate in various ways between humans and this hypothetical isolated experience-of-vision that might be happening in an AI image classifier.
How good an approximation is it to linearly “add up” positive experiences when the experiences are near-identical? I.e., are two identical computer simulations of a suffering emulated mind any worse than one simulation? What about a single simulation on a computer with double-thick wires? What about a simulation identical in every respect except one? I haven’t thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you’d be “double-counting” (a toy numerical sketch of this worry follows after this list).
Something about “higher pleasures”, or Nietzscheanism, or the complexity of value: maybe there’s more to life than just adding up positive and negative valence?? Personally, if I got to decide right now what happens to the future of human civilization, I would definitely want to try to end suffering (insofar as this is feasible), but I wouldn’t want to try to max out happiness, and certainly not via any kind of rats-on-heroin-style approach. I would rather take the opposite tack, and construct a smaller number of god-like superhuman minds, who might not even be very “happy” in any of the usual senses (i.e., perhaps they are meditating on the nature of existence with great equanimity), but who in some sense are able to, like… maximize the potential of the universe to know itself and explore the possibilities of consciousness. Or something...
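(To make the “double-counting” worry in point 3 a bit more concrete, here is a toy numerical sketch. None of it comes from anyone’s actual proposal: the geometric discount rule and the `similarity` parameter are invented purely for illustration, just to show how aggregate welfare could saturate rather than grow linearly once experiences become near-identical.)

```python
# Toy model only: contrasts naive linear aggregation of welfare with a
# hypothetical "similarity discount", under which the k-th near-identical
# copy of an experience contributes intensity * (1 - similarity)**k.
# Both the discount rule and the numbers are made up for illustration.

def linear_total(intensity: float, n_copies: int) -> float:
    """Standard total-hedonic aggregation: every copy counts in full."""
    return intensity * n_copies


def discounted_total(intensity: float, n_copies: int,
                     similarity: float = 0.99) -> float:
    """Each additional near-duplicate adds geometrically less.

    Closed form of intensity * sum_{k=0}^{n-1} (1 - similarity)**k,
    which saturates near intensity / similarity as n_copies grows.
    """
    d = 1.0 - similarity
    return intensity * (1.0 - d ** n_copies) / (1.0 - d)


if __name__ == "__main__":
    intensity = 1.0  # arbitrary welfare units for a single experience
    for n in (1, 2, 1_000, 1_000_000_000):
        print(f"{n:>13,} copies: linear={linear_total(intensity, n):>16,.2f}  "
              f"discounted={discounted_total(intensity, n):.4f}")
```

Under linear aggregation a billion near-identical experiences matter a billion times as much as one; under the discounted model they cap out at barely more than one. That saturating behaviour is (very roughly) the kind of sub-linear aggregation the blackfly/shrimp worry gestures at; whether anything like it is actually true is exactly the open question.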
I don’t have time to reply to all of these, but I think it’s worth saying, re point 1, that inasmuch as hedonism ‘struggles’ with this, it’s because it’s basically the only axiology that commits to addressing it at all. I don’t consider that a weakness, since there clearly is some level of comparability between my stubbing my toe and my watching a firework.
Preference utilitarianism sort of ducks around this by equivocating over whether determining a preference requires understanding the happiness its satisfaction brings (in which case it has the same problem) or whether preferences rely on some even more mysterious forces with even weirder implications. I wrote much more on this equivocation here.
Also, re size specifically, he literally says size ‘is closely analogous to the sense in which (if welfare is aggregable at all) one population can have more welfare than another due to its size’. It’s common to joke about ‘hedons’, but I see no reason why one should be a materialist and yet not expect to find some minimum physical unit of happiness in conscious entities. Then the more hedons an entity has, the sizier its happiness would be. It’s also possible that we find multiple indivisible hedon-like objects, in which case the philosophy gets harder again (and at the very least, it’s going to be tough to have an objective weighting between hedons and antihedons, since there’s no a priori reason to assume it should be 1-to-1). But I don’t think hedonists should have to assume the latter, or prove that it’s not true.
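(A minimal way to write that last point down, purely as an illustration and not anything from the quoted post: if an entity contains H hedons and A antihedons, a hedonist tally would be something like W = H − w·A, where w is the hedon-to-antihedon exchange rate. The point is just that nothing fixes w = 1 a priori, so any particular choice of w is doing real ethical work.)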