I was trying to keep the discussion of ‘which kind of pain is morally relevant’ separate from the discussion of your proposed system of giving people a chance to be helped in proportion to their suffering. It may be that the two are so intertwined that this is unproductive, but I would like you to respond to my comment about the latter before we discuss it further.
I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn’t take any action, and that’s just absurd. Therefore, my way of determining total pain is problematic. Here “a resulting state of affairs” is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.
Well, if who suffered didn’t matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:
Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 2.
Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 3. And so forth…
According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
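To make the comparison procedure concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each state of affairs can be represented as a list of individual welfare levels (a higher number meaning that person is better off); the function name and representation are mine, not part of the argument above.

```python
# A rough sketch of the leximin comparison described above (illustrative only).
# Assumption: each state of affairs is a list of individual welfare levels,
# where a higher number means that person is better off.

def leximin_compare(state_a, state_b):
    """Return 'A' if state_a is morally better by leximin, 'B' if state_b is,
    or 'tie' if every compared person is just as badly off."""
    # Order each state from worst off to best off (Step 1, Step 2, ...).
    a = sorted(state_a)
    b = sorted(state_b)
    # Compare the worst off, then the next worst off, and so forth.
    for welfare_a, welfare_b in zip(a, b):
        if welfare_a > welfare_b:
            return "A"  # A's worst-off person at this step is better off.
        if welfare_b > welfare_a:
            return "B"
    return "tie"

# Two states with the same total welfare (11 each), but leximin prefers the
# one whose worst-off person is better off.
print(leximin_compare([1, 5, 5], [3, 4, 4]))  # -> "B"
```

The example at the bottom is the point being made here: even when totals are equal, leximin can still rank one state of affairs above another by looking at the worst off first.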
My appeal to leximin is not ad hoc because it takes an individual’s suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don’t actually endorse leximin, however, because it does not take an individual’s identity seriously (i.e. it doesn’t treat who suffers as morally relevant, whereas I think who suffers matters).
So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting states of affairs would be morally just as bad.
Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on the total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session, nor the longest pain, e.g. a minor headache that lasts one’s entire life. Rather, it is the most intense pain over the longest period of time: the person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Once we realize this, it seems unlikely that each possible action will lead to a state of affairs involving such pain. (Note that this is to deny A1.)
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.
To give each possible action an equal chance is certainly not to flip a coin between murdering someone or not. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one of them. (There are many complexities here that make the discussion hard, like what counts as a distinct action.)
However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like the classical argument against utilitarianism. Anyway, I guess, like effective altruism, I can recognize rules that forbid murder, etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather, it is to show that we shouldn’t use the utilitarian way of determining “total pain”, which underlies effective altruism.
I have argued for this by
1) arguing that the utilitarian way of determining “total pain” goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which they are determining a “total moral value” based on people’s pains, which is different from determining a total pain. I still need to address this point.
2) responding to your objection against my way of determining “total pain” (the first half of this reply).