I also think that judging a life from a hedonistic standpoint of enjoyment versus suffering, as if these could be summed to a total and the life deemed worthwhile or not by that total, is fundamentally mistaken. I think it's super weird that so many people commenting here are taking that assumption for granted without even acknowledging it. Is a life that has a few moments of glory, and perhaps leaves some lasting creative achievement, but sums to net-negative hedonic experience, not a life worth living? Would you say to someone experiencing chronic pain that you were going to murder them because you believed their life was net negative, since they were experiencing more suffering than pleasure?
I’m pretty sympathetic to your view here[1] and preference- and desire-based theories generally. But I’m also skeptical that these dramatically favour humans over nonhuman animals, to the point that global health beats animal welfare.
I suspect the cognitive versions of preferences and desires are not actually interpersonally comparable in general, with utilitarian preferences vs deontologist preferences as a special case. They may also exist in simple forms in other animals, and I give that non-negligible probability. There may be no fact that points to humans mattering more (or other animals mattering more than humans). We may just need to normalize or use Pareto, say. See my posts Types of subjective welfare, Which animals realize which types of subjective welfare? and Solution to the two envelopes problem for moral weights.
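To make the two envelopes issue concrete, here's a toy numerical sketch (the numbers are made up purely for illustration, not estimates of anyone's actual moral weights):

```python
# Toy illustration of the two envelopes problem for moral weights,
# with entirely made-up numbers (not a claim about actual moral weights).
# Suppose we're 50/50 between two theories of a chicken's welfare capacity
# relative to a human's: 0.01x on one theory, 2x on the other.

p = 0.5
chicken_per_human = [0.01, 2.0]                          # chicken's weight in human units
human_per_chicken = [1 / w for w in chicken_per_human]   # same two theories, in chicken units

# Expected moral weight of a chicken, fixing the human as the unit:
ev_chicken_in_human_units = sum(p * w for w in chicken_per_human)   # ~1.005 -> chicken ~ human

# Expected moral weight of a human, fixing the chicken as the unit:
ev_human_in_chicken_units = sum(p * w for w in human_per_chicken)   # ~50.25 -> human >> chicken

print(ev_chicken_in_human_units, ev_human_in_chicken_units)
# The ranking flips depending on which species' welfare is treated as the fixed unit,
# which is why some normalization (or a Pareto-style approach) is needed before
# taking expectations across theories.
```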
I think many other animals have access to things like love and achievement, e.g. animals who raise their own offspring. Here’s a nice illustration from Peter Godfrey-Smith’s recent 80,000 Hours podcast episode:

In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, “I had +4 level of experience for this hour, then I had −2 for the next hour, and then I had −1” — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it.

The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process.
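To put the quote's contrast in toy terms (the hour scores are Godfrey-Smith's made-up numbers; the non-additive gloss in the comments is only my illustration, not anything he proposes):

```python
# The "hour-by-hour accounting" described in the quote, with its example numbers.
hours = [+4, -2, -1]
naive_total = sum(hours)   # = +1; on the naive view the stretch is worthwhile iff this sum > 0

# The quote's point is that the right evaluation is not a function of this sum alone:
# the same list of hours can be cast in a different light by what they culminate in
# (e.g. successfully raising young), so in general evaluate(hours, outcome) != f(sum(hours)).
print(naive_total)
```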
[1] For keeping people alive, not bringing them into existence, given my person-affecting intuitions.
I agree that there are difficult, unresolved philosophical questions regarding hypothetical not-yet-existent people whose likelihood of existing varies with the actions of currently existing people (which may be a group that includes blastocysts, for instance).
Regarding non-human animals and digital entities, I think we need to lean more heavily on computational functionalism (as the video you shared discussed). This point, too, is up for debate, but I personally feel much more confident supporting computational functionalism than biological chauvinism.
In the case of complex-brained animals (e.g. parrots), I do think that there is something importantly distinct about them as compared to simple-brained animals (e.g. invertebrates).
Some invertebrates do tend to their young, even potentially sacrificing their own lives on behalf of their brood. See: https://entomologytoday.org/2018/05/11/research-confirms-insect-moms-are-the-best/
I think that in order to differentiate the qualia underlying this behavior in insects from the qualia experienced by the parrots defending their young, we must turn to neuroscience.
In a bird or mammal, neuroscience can offer evidence of specific sets of neurons carrying out computations such as self-modeling and other-modeling, and of things like fondness for or dislike of specific other modeled agents. In insects (and shrimp, jellyfish, etc.), neuroscience shows that the brains consistently lack sets of neurons that could plausibly be carrying out such complex self/other social modeling; insect brains have various sets of neurons for sensory processing, motor control, and other such basic functions.

Recently, we have made a comprehensive map of every neuron and nearly all of their associated synapses in the preserved brain of an individual fruit fly. We can analyze this entire connectome and label the specific functions of every neuron. I recently attended a talk by a neuroscientist who built a computational model of a portion of this fruit fly connectome and showed that a specific set of simulated inputs (presentation of sugar to taste sensors on the legs) produced the expected stereotyped reaction in the simulated body (extension of the proboscis).
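As a rough sketch of what that kind of connectome-based simulation involves (the neuron labels, weights, and simple threshold dynamics below are made up for illustration, not the actual model from the talk):

```python
import numpy as np

# Minimal sketch of propagating activity through a tiny, made-up "connectome".
# Real fruit fly connectome models use over a hundred thousand neurons with fitted
# dynamics; this only illustrates the idea of stimulating sensory neurons and
# checking whether a downstream motor population crosses threshold.

neurons = ["sugar_sensor_L", "sugar_sensor_R", "interneuron_A",
           "interneuron_B", "proboscis_motor"]
idx = {name: i for i, name in enumerate(neurons)}

# Synaptic weight matrix W[i, j]: strength of the connection from neuron i to j
# (made-up values standing in for measured synapse counts).
W = np.zeros((5, 5))
W[idx["sugar_sensor_L"], idx["interneuron_A"]] = 1.0
W[idx["sugar_sensor_R"], idx["interneuron_A"]] = 1.0
W[idx["interneuron_A"], idx["interneuron_B"]] = 1.5
W[idx["interneuron_B"], idx["proboscis_motor"]] = 2.0

def run(stimulus, steps=4, threshold=0.9):
    """Propagate a binary stimulus through the weighted graph for a few steps."""
    active = stimulus.astype(bool)
    for _ in range(steps):
        drive = W.T @ active.astype(float)       # summed input from currently active neurons
        active = active | (drive >= threshold)   # keep stimulated neurons on; simple threshold units
    return active

# "Present sugar to the taste sensors on the legs":
stim = np.zeros(len(neurons))
stim[idx["sugar_sensor_L"]] = 1
stim[idx["sugar_sensor_R"]] = 1

final = run(stim)
print("proboscis motor active:", bool(final[idx["proboscis_motor"]]))  # True
```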
That, to me, is the beginning of compelling evidence that our model of the functions of these neurons is correct.
Thus, I would argue that parrots are in a fundamentally different moral category from fruit flies.
For the case of comparing complex-brained non-human animals to humans, the neuroscientific evidence is less clear-cut and more complex. I believe there is a case to be made, but it is beyond the scope of this comment.
Thanks for your thoughtful engagement on this matter.