Providing such an in-depth writeup is really useful, thanks. At the risk of derailing into an academic philosophy discussion, here are some clarificatory questions about what you value (which I’m particularly interested in because I think your values are relatively common among EAs):
I value having enjoyable experiences and avoiding unpleasant experiences. If I value these experiences for myself, then it’s reasonable for me to value them in general. That’s the two-sentence version of why I’m a hedonistic utilitarian.
Why do you think that these are the only things of value?
Pleasurable and painful experiences in non-humans have moral value. Non-humans includes non-human animals, computer simulations of sentient beings, artificial biological beings, and anything else that can experience pleasure and suffering.
Leaving aside (presumably hypothetical) computer simulations and artificial biological beings, do you think non-humans like chickens and fish have experiences in a month in a factory farm that are as bad as a human’s would be? If not, roughly how much worse or less bad would you guess they are? (I’m talking about a similar equivalence to that described in this Facebook poll, but focusing purely on morally relevant attributes of experiences.)
The best possible outcome would be to fill the universe with beings that experience as much joy as possible for their entire lives.
Can you give an example of the ideal form of joy? Would an intense, simple experience of physical pleasure be a decent candidate? (Picking an example of such an experience could be left as an exercise for the reader.)
I am unpersuaded by arguments of the form “utilitarianism produces an unintuitive result in this contrived thought experiment”
What’s the most unintuitive result that you’re prepared to accept, and which gives you most pause?
The great thing about nested comments is that derailments are easy to isolate. :)
Why do you think that these are the only things of value?
I don’t understand what it would mean for anything other than positive and negative experiences to have value. I believe that when people say they inherently value art (or something along those lines), they say this because the thought of art existing makes them happy and the thought of art not existing makes them unhappy, and it’s the happy or unhappy feelings that have actual value, not the existence of art itself. If people thought art existed but it actually didn’t, that would be just as good as if art really existed. Of course, when I say this, you might react negatively to the idea of art not existing even though, in the hypothetical, nobody knows it doesn’t exist; but that’s because you, contemplating the scenario, do know it doesn’t exist, so you still experience the negative feelings associated with art not existing. If you didn’t experience those feelings, it wouldn’t matter.
do you think non-humans like chickens and fish have experiences in a month in a factory farm that are as bad as a human’s would be?
I expect there’s a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it’s more likely that factory farms are worse for humans than that they’re worse for chickens/fish, so in expectation, they’re worse for humans, but not much worse.
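To make the “worse in expectation, but not much worse” step concrete, here’s a minimal worked sketch of the expected-value comparison. The branch probabilities are the ones I gave above; the 0.1 ratio for the “less bad” branch is an arbitrary placeholder purely for illustration, not a figure I’m endorsing.

```python
# Hedged sketch: expected badness of a month in a factory farm, relative to a
# human's experience (human = 1.0). The probabilities match the comment above;
# the 0.1 "less bad" ratio is an arbitrary placeholder, not a real estimate.

def expected_relative_badness(p_equal, ratio_if_less=0.1):
    """Expected badness relative to a human, given the probability that the
    experience is equally bad and a placeholder ratio otherwise."""
    return p_equal * 1.0 + (1 - p_equal) * ratio_if_less

chicken = expected_relative_badness(p_equal=0.50)  # 0.5*1.0  + 0.5*0.1  = 0.55
fish = expected_relative_badness(p_equal=0.25)     # 0.25*1.0 + 0.75*0.1 = 0.325

print(f"chicken: {chicken:.3f}, fish: {fish:.3f}")
# On these placeholder numbers, the expected badness for a human is higher,
# but only by a factor of roughly 2-3: "worse, but not much worse".
```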
I don’t know how consciousness works, although I believe it’s fundamentally an empirical question. My best guess is that certain types of mental structures produce heightened consciousness in a way that gives a being greater moral value, but that most of the additional neurons that humans have do not contribute at all to heightened consciousness. For example, humans have tons of brain space devoted to facial recognition, but I don’t expect that we can feel greater levels of pleasure or pain as a result of having this brain space.
Can you give an example of the ideal form of joy?
The best I can do is introspect about what types of pleasure I enjoy most and how I’m willing to trade them off against each other. I expect that the happiest possible being can be much happier than any animal; I also expect that it’s possible in principle to make interpersonal utility comparisons, so we could know what a super-happy being looks like. We’re still a long way away from being able to do this in practice.
What’s the most unintuitive result that you’re prepared to accept, and which gives you most pause?
There are a lot of results that used to make me feel uncomfortable, but I didn’t consider that good evidence that utilitarianism is false. They don’t make me uncomfortable anymore because I’ve gotten used to them. Whichever result gives me the most pause will be one I haven’t heard before, precisely because I haven’t yet had time to get used to it. I predict that the next time I hear a novel thought experiment where utilitarianism leads to some unintuitive conclusion, it will make me feel uncomfortable, but I won’t change my mind, because I don’t consider discomfort to be good evidence. Our intuitions are often wrong about how the physical world works, so why should we expect them to always be right about how the moral world works?
At some point we have to use intuition to make moral decisions—I have a strong intuition that nothing matters other than happiness or suffering, and I apply this. But anti-utilitarian thought experiments usually prey on some identifiable cognitive bias. For example, the repugnant conclusion takes advantage of people’s scope insensitivity and inability to aggregate value across separate individuals.
I expect there’s a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it’s more likely that factory farms are worse for humans than that they’re worse for chickens/fish, so in expectation, they’re worse for humans, but not much worse.
Whoa, I didn’t realize that anyone thought that; if I came to believe it, it would change my views greatly.