(Posted as a top-level comment as I had some general things to say; this was originally a response here.)
I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don’t find it particularly compelling myself, but I can understand why others could find it important.
Overall I find this post confusing though, since the framing seems to be ‘Effective Altruism is making an intellectual mistake’ whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage.
The comments etc. then just seem to have mostly been people explaining why they don’t find compelling your moral intuition that ‘non-purely experientially determined’ and ‘purely experientially determined’ amounts of pain cannot be compared. Since we seem to have reached a point of fundamental disagreement about considered moral values, it does not seem that attempting to change each other’s minds is very fruitful.
I think I would have found this post more conceptually clear if it had been structured:
EA conclusions actually require an additional moral assumption/axiom—and so if you don’t agree with this assumption then you should not obviously follow EA advice.
(Optionally) Why you find the moral assumption unconvincing/unlikely
(Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.
Where throughout, the assumption is the commensurability of ‘non-purely experientially determined’ and ‘purely experientially determined’ experience.
In general I am not very sure what you had in mind as the ideal outcome of this post. I’m surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of established consequentialist thinking etc.). But equally I am not sure what value we can especially bring to you if you feel very sure in your conviction that the assumption does not hold.
I didn’t know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn’t structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I’m wrong or he realizes he’s wrong. If I’m wrong, I hope to realize it asap.
Also, I agree with kbog. I think it’s much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not.
After figuring that out, there is the question of which sense of “involves more pain than” is more morally important: is it the “is experientially worse than” sense or kbog’s sense? Perhaps that comes down to intuitions.
Thanks for your reply—I’m extremely confused if you think there is no “intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person”, since (as has been discussed in these comments) if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage.
I had assumed you were arguing at the ‘which is morally important’ level, which I think might well come down to intuitions.
Thanks for your reply. I can understand why you’d be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of “more pain”.
I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of “more pain”, and then presenting an argument for why my sense of “more pain” is the one that really matters.
Thanks for getting back to me. I’ve read your reply to kbog, but I don’t find your argument especially different to those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again I see why there is a distinction one could care about, but I don’t find it personally compelling.
(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as ‘two people experiencing the same pain is twice as bad as one person experiencing that pain’ (there is some change from discussing ‘total pain’ to ‘badness’ here, but I think it still fits with our usage).)
A couple of brief points in favour of the classical approach:
It in some sense ‘embeds naturally’ in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
It also has other pleasing properties, such as the support it gets from veil of ignorance arguments, as discussed in other comments.
One additional thing to note is that dropping the comparability of ‘non-purely experientially determined’ and ‘purely experientially determined’ experiences (henceforth ‘Comparability’) does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other.
For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else. To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).
Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:
(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as ‘two people experiencing the same pain is twice as bad as one person experiencing that pain’ (there is some change from discussing ‘total pain’ to ‘badness’ here, but I think it still fits with our usage).)
When you say that many people here would embrace the assumption that “two people experiencing the same pain is twice as bad as one person experiencing that pain”, are you using “bad” to mean “morally bad?”
I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.
Now, one basic premise that kbog and I have been working with is this:
If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.
However, based on my preferred sense of “more pain”, two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.
Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn’t I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)?
I don’t think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy’s suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that “One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter.” Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn’t begrudge another person an improvement that costs us nothing. Amy shouldn’t begrudge Susie an improvement that costs her nothing.
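Stated compactly (my own paraphrase of the quoted definition, writing u_i(X) for how well off person i is under distribution X): X is strictly Pareto superior to Y just in case u_i(X) >= u_i(Y) for every i, and u_i(X) > u_i(Y) for at least one i. So ‘only Amy suffers’ is Pareto superior to ‘Amy and Susie both suffer’ (Susie is strictly better off and Amy is no worse off), whereas neither ‘only Amy suffers’ nor ‘only Bob suffers’ is Pareto superior to the other.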
Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of “more pain”. And since I think my preferred sense of “more pain” is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.
A couple of brief points in favour of the classical approach: It in some sense ‘embeds naturally’ in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
I’m not sure I see the advantage here, or what the alleged advantage is. I don’t see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
It also has other pleasing properties, such as the support it gets from veil of ignorance arguments, as discussed in other comments.
The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone’s position). However, as I argued, this stipulation is not true OF the real world because each of us didn’t actually have an equal chance of being in any of our position, and what we should do should be based on the facts, and not on a stipulation. In kbog’s latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because “The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system.” I have yet to respond to this latest reply because I have been too busy arguing about our senses of “more pain”, but if I were to respond, I would say this:
“I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world.” Anyways, I don’t want to say too much here, because kbog might not see it and it wouldn’t be fair if you only heard my side. I’ll respond to kbog’s reply eventually (haha) and you can follow the discussion there if you wish.
Let me just add one thing: Based on Singer’s intro to Utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls’ claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn’t support classical utilitarianism, which says we ought to maximize total utility rather than average utility.
One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.
Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.
To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).
While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers could matter in themselves.
A couple of brief points in favour of the classical approach: It in some sense ‘embeds naturally’ in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
I’m not sure I see the advantage here, or what the alleged advantage is. I don’t see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
The argument is that if:
The amount of ‘total pain’ is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured if you liked for a less extreme but clearly applicable case)
Then the amount of ‘total pain’ is always at least that very large number, such that none of your actions can change it at all.
Thus (and you would disagree with this implication due to your adoption of the Pareto principle) since the level of ‘total pain’ is the morally important thing, all of your possible actions are morally equivalent.
As I mention I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue:
This is essentially just applying the non-identity problem to the example above. (Weirdly enough I think the best explanation I’ve seen of the non-identity problem is the second half of the ‘the future’ section of Derek Parfit’s Wikipedia page.)
The argument goes something like:
D1 If we adopt that ‘total pain’ is the maximal pain experienced by any person for whom we can affect how much pain they experience (an attempt to incorporate the Pareto principle into the definition for simplicity’s sake).
A1 At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some).
A2 Due to the chaotic nature of the world, and the strong dependence of personal identity on birth timings (if the circumstances of one’s conception change even very slightly then your identity will almost certainly be completely different), any actions in the world now will within a few generations result in a completely different set of people existing.
C1 Thus by A1 the future is going to contain someone experiencing extreme pain, but by A2 exactly who this person is will vary with any different course of action; thus by D1 the ‘total pain’ in all cases is uniformly very high.
This is similar to the point made by JanBrauner; however, I did not find that your response to their comment particularly engaged with the core point of the extreme unpredictability of the maximum pain caused by an act.
After your most recent comment I am generally unsure exactly what you are arguing for in terms of moral theories. When arguing about which form of pain is morally important you seem to make a strong case that one should consider the ‘total pain’ in a situation solely by whatever pain involved is most extreme. However when discussing moral recommendations you don’t completely focus on this. Thus I’m not sure whether this comment and its examples will miss the mark completely.
(There are also more subtle defenses, such as those relating to how much one cares about future people etc. which have thusfar been left out of the discussion).
Thanks for the exposition. I see the argument now.
You’re saying that, if we determined “total pain” by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.
I’ve since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.
Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.
My response:
JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn’t seem to be based on unpredictability. If A1 and A2 are true, then each possible action more-or-less seems to inevitably result in a different person suffering maximal pain.
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don’t find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).
I was trying to keep the discussions of ‘which kind of pain is morally relevant’ and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.
You’re saying that, if we determined “total pain” by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.
Given that you were initially arguing (with kblog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion.
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
But this seems extremely far removed from any day to day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.
I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive.
The issue is that this also applies to the case of deciding whether to set the island on fire at all.
I was trying to keep the discussions of ‘which kind of pain is morally relevant’ and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.
I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn’t take any action, and that’s just absurd. Therefore, my way of determining total pain is problematic. Here “a resulting state of affairs” is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.
Well, if who suffered didn’t matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:
Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 2.
Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 3. And so forth…
According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
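To make the procedure concrete, here is a minimal sketch in code (purely my own illustration; representing each person’s welfare by a plain number, higher being better off, is a simplifying assumption):

```python
# Illustrative sketch of the leximin comparison described above.
# Each state of affairs is a list of personal welfare levels (higher = better off),
# assumed to cover the same number of people.

def leximin_better(state_a, state_b):
    """Return True if state_a is morally better than state_b under leximin."""
    # Step 1, Step 2, ...: compare the worst off in each state, then the next
    # worst off (setting aside those already compared), and so on.
    for worst_a, worst_b in zip(sorted(state_a), sorted(state_b)):
        if worst_a != worst_b:
            return worst_a > worst_b
    return False  # the states are equally good as far as leximin is concerned

# Saving Amy and Susie (so Bob suffers, -5) vs. saving Bob (so Amy and Susie each suffer -5):
print(leximin_better([0, 0, -5], [-5, -5, 0]))  # True: leximin favours saving the two
```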
My appeal to leximin is not ad hoc, because it takes an individual’s suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don’t actually endorse leximin, because leximin does not take an individual’s identity seriously (i.e. it doesn’t treat who suffers as morally relevant, whereas I think who suffers matters).
So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting state of affairs would be morally just as bad.
Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session. It is not the longest pain, e.g. a minor headache that lasts one’s entire life. Rather, it is the most intense pain over the longest period of time. The person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Realizing this, it is unlikely that each possible action will lead to a state of affairs involving this. (Note that this is to deny A1.)
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
But this seems extremely far removed from any day to day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.
To give each possible action an equal chance is certainly not to flip a coin between murdering someone or not. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one. (There are many complexities here that make the discussion hard like what counts as a distinct action.)
However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like the classical argument against utilitarianism. Anyways, I guess, like effective altruism, I can recognize rules that forbid murdering etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather it is to show that we shouldn’t use the utilitarian way of determining “total pain”, which underlies effective altruism.
I have argued for this by
1) arguing that the utilitarian way of determining “total pain” goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which they are determining a “total moral value” based on people’s pains, which is different from determining a total pain. I still need to address this point.
2) responding to your objection against my way of determining “total pain” (first half of this reply)
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)
I think thinking in terms of ‘total pain’ is not normally how this is approached; instead one thinks about converting each person’s experience into ‘utility’ (or ‘moral badness’ etc.) on a personal level, and then aggregating all the different personal utilities into a global figure. I don’t know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).
I bring this up since you are approaching this from a different angle than the usual, which makes people’s standard lines of reasoning seem more complex.
A couple of brief points in favour of the classical approach: It in some sense ‘embeds naturally’ in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
I’m not sure I see the advantage here, or what the alleged advantage is. I don’t see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
I’ll discuss this in a separate comment since I think it is one of the strongest arguments against your position.
I don’t know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.
I believe it is not always right to prevent the morally worse case.
Then I am really not sure at all what you are meaning by ‘morally worse’ (or ‘right’!). In light of this, I am now completely unsure of what you have been arguing the entire time.
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)
I think thinking in terms of ‘total pain’ is not normally how this is approached; instead one thinks about converting each person’s experience into ‘utility’ (or ‘moral badness’ etc.) on a personal level, and then aggregating all the different personal utilities into a global figure. I don’t know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).
So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:
Assign a moral value to each person’s experiences based on its overall what-it’s-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it’s-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone’s 5 minor headaches as we would to someone else’s 1 major headache.
We then add up the moral value assigned to each person’s experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
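To check that I follow, here is a rough computational rendering of that two-step procedure (entirely my own illustration; the numbers, and the simple summing inside a person, are just stand-ins for the experiential judgments you describe):

```python
# Rough rendering of the suggested two-step procedure (illustrative only).
# Numbers are badness values: higher = worse.

def personal_moral_badness(experiences):
    """Step 1: assign one moral value to a person's experiences based on their
    overall what-it's-like. The real work lies in that judgment; summing
    intensity-weighted episodes here is just a placeholder."""
    return sum(experiences)

def global_moral_badness(people):
    """Step 2: add up the value assigned to each person to get a global figure."""
    return sum(personal_moral_badness(person) for person in people)

# Amy and Susie each suffering a pain of badness 5 vs. Bob alone suffering it:
print(global_moral_badness([[5], [5]]))  # 10
print(global_moral_badness([[5]]))       # 5 -> two people suffering comes out twice as bad
```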
This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy’s and Susie’s lives or Bob’s life, but we cannot save all. Who do we save? Most people would reason that we should save Amy’s and Susie’s lives because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don’t think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn’t want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy’s and Susie’s deaths being a morally worse thing than Bob’s death, and so I would at least agree that what we ought to do in this case wouldn’t be to give everyone a 50% chance.
In any case, if we assign a moral value to each person’s experience in the same way that we might assign a moral value to each person’s life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1, I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I’ve attributed to kbog. (I added the “[...]” to make the sentence structure more clear.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case; rather we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.
Then I am really not sure at all what you are meaning by ‘morally worse’ (or ‘right’!). In light of this, I am now completely unsure of what you have been arguing the entire time.
Please don’t be alarmed (haha). I assume you’re aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognizes other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint is true of some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.
You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here’s an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of “more pain” (i.e. my way of quantifying and comparing pains), the single person suffering the major headache is the morally worse case relative to the 100 people each suffering a minor headache, because his major headache is experientially worse than any of the other people’s minor headaches. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is, experientially speaking, than any one of the others’ (i.e. how much morally worse his suffering is relative to the 100’s suffering).
Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person’s suffering.
On ‘people should have a chance to be helped in proportion to how much we can help them’ (versus just always helping whoever we can help the most).
(Again, my preferred usage of ‘morally worse/better’ is basically defined so as to mean one should always pick the ‘morally best’ action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point… )
How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.
In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of ‘actually helping people’ in favour of it; but then it seems strange that you argue for it so forcefully.
As a separate point, this form of reasoning seems rather incompatible with your claims about ‘total pain’ being morally important, and also determined solely by whoever is experiencing the most pain. Thus, if you follow your approach and give some chance of helping people not experiencing the most pain, in the case when you do help them, the ‘total pain’ does not change at all!
For example:
Suppose Alice is experiencing 10 units of suffering (by some common metric)
10n people (call them group B) are experiencing 1 unit of suffering each
We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of ‘total pain’ remains at 10 as Alice is not helped.
This means that n/(n+1) of the time the ‘total pain’ remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
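To put rough numbers on this, here is a quick sketch (purely illustrative; the helper function is just my shorthand for your proposed rule):

```python
# Sketch of the proportional-chance rule in the example above (illustrative only).

def proportional_chances(sufferings):
    """Each person's chance of being helped, proportional to their suffering."""
    total = sum(sufferings)
    return [s / total for s in sufferings]

n = 100
sufferings = [10] + [1] * (10 * n)   # Alice, then the 10n members of group B
chances = proportional_chances(sufferings)

print(chances[0])        # Alice's chance: 10/(10 + 10n) = 1/(n + 1), here ~0.0099
print(sum(chances[1:]))  # chance some group-B member is helped: n/(n + 1), here ~0.9901
# As n grows, the chance that the 'total pain' (Alice's 10) is reduced at all goes to zero.
```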
Finally I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.
Indeed, for another example:
Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.
You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with “reason and empathy”.
(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)
Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I’m back :P
How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.
This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers, where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person’s chance ought to be in proportion to what he/she has to suffer because suffering also matters.
Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer?
Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.
One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do, as you suggest, seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in itself, but if what is fair to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then its moral relevance might be derived. If so, I think we can ignore the notion of fairness.
For example:
• Suppose Alice is experiencing 10 units of suffering (by some common metric)
• 10n people (call them group B) are experiencing 1 unit of suffering each
• We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of ‘total pain’ remains at 10 as Alice is not helped.
This means that n/(n+1) of the time the ‘total pain’ remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
This is a fantastic objection. This objection is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection shows a serious problem with giving each person a chance of being saved proportional to his/her suffering insofar as it shows that doing so can lead us to give a very very small chance to someone who has a lot to suffer when it intuitively seems to me that we should give him a much higher chance of being saved given how much more he/she has to suffer relative to any other person.
So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.
I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.
Indeed, for another example:
• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
• However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.
You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with “reason and empathy”.
I had anticipated this objection when I wrote my post. In footnote 4, I wrote:
“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”
Admittedly, there are two potential problems with what I say in my footnote.
1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.
2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).
But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.
All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.
I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters… I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker—something more acceptable to me… (although I feel doubtful about this).
So you’re suggesting that most people aggregate different people’s experiences as follows:
Well most EAs, probably not most people :P
But yes, I think most EAs apply this ‘merchandise’ approach weighted by conscious experience.
In regards to your discussion of moral theories, side constraints:
I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right then you should make this clear from the start (and ideally bake it into what you mean by ‘morally worse’).
Basically I think sentences like:
“I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case”
are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using ‘morally worse’ in a nonstandard way (and possibly use a different term). I have the intuition that if you say “X is the morally relevant factor” then which actions you say are right will depend solely on how they affect X.
Hence if you say ‘what is morally relevant is the maximal pain being experienced by someone’ then I expect that all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone.
Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles).
I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I’ll make a separate comment to discuss it.
So you’re suggesting that most people aggregate different people’s experiences as follows:
FYI, I have since reworded this as “So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:”
I think it is a more precise formulation. In any case, we’re on the same page.
Basically I think sentences like:
“I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case”
are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using ‘morally worse’ in a nonstandard way (and possibly use a different term). I have the intuition that if you say “X is the morally relevant factor” then which actions you say are right will depend solely on how they affect X.
The way I phrased Objection 1 was as follows: “One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie.”
Notice that this objection in argument form is as follows:
P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.
P2) We ought to prevent the morally worst case.
C) Therefore, we should help Amy and Susie over Bob.
My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.
Given this premise, I’ve been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of “involves more pain than”. So I recently started arguing that my sense is the sense that really matters.
Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But for other moral theorists, they would not because not all things that they take to matter (i.e. to be morally relevant, to have moral value, etc) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.
However, you are right that I should make this aspect of my work more clear.
Some of your quotes are broken in your comment; you need a > for each paragraph (and two >s for double quotes etc.)
I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!
I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side-constraint you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.
In any case as I have given you plenty of other comment threads to think about I am happy to leave this one here—my point was just a call for clarity.
I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.
By “you switched”, do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I’ve switched?
And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?
Yes, “switched” was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.
To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a ‘show help’ button that explains some of these things).
Little disagreement in philosophy comes down to a matter of bare differences in moral intuition. Sometimes people are just confused.
Hey Alex, thanks for your comment!
I didn’t know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn’t structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I’m wrong or he realizes he’s wrong. If I’m wrong, I hope to realize it asap.
Also, I agree with kbog. I think it’s much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not.
After figuring that out, there is the question of which sense of “involves more pain than” is more morally important: is it the “is experientially worse than” sense or kbog’s sense? Perhaps that comes down to intuitions.
Thanks for your reply—I’m extremely confused if you think there is no ’intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person” since (as has been discussed in these comments) if you view/define total pain as being measured by intensity-weighted number of experiences this gives a clear metric that matches consequentialist usage.
I had assumed you were arguing at the ‘which is morally important’ level, which I think might well come down to intuitions.
I hope you manage to work it out with kbog!
Hey Alex,
Thanks for your reply. I can understand why you’d be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of “more pain”.
I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of “more pain”, and then presenting an argument for why my sense of “more pain” is the one that really matters.
I’d be interested to know what you think.
Thanks for getting back to me. I've read your reply to kbog, but I don't find your argument especially different from those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again, I see why there is a distinction one could care about, but I don't find it personally compelling.
(Indeed I think many people here would explicitly embrace a reframed version of your P3 from your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)
A couple of brief points in favour of the classical approach:
It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)
It also has other pleasing properties, such as the veil of ignorance, as discussed in other comments.
One additional thing to note is that dropping the comparability of ‘non-purely experientially determined’ and ‘purely experientially determined’ experiences (henceforth ‘Comparability’) does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other.
For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else. To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).
Hey Alex,
Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:
When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad"?
I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.
Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.
However, based on my preferred sense of “more pain”, two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.
Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)?
I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that "One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter." Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn't begrudge another person an improvement that costs us nothing. Amy shouldn't begrudge Susie an improvement that costs her nothing.
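To make Otsuka's two conditions concrete, here is a rough sketch of how one could check strict Pareto superiority mechanically. The function name and the 0/1 welfare numbers for Amy, Susie and Bob are my own illustration, not anything Otsuka or kbog have said:

```python
def strictly_pareto_superior(dist_a, dist_b):
    """Return True if distribution dist_a is strictly Pareto superior to dist_b.

    Both arguments map each person in the same population to a welfare level
    (higher = better off). Condition (i): at least one person is better off
    under dist_a. Condition (ii): nobody is worse off under dist_a.
    """
    assert dist_a.keys() == dist_b.keys(), "Pareto comparisons need the same population"
    someone_better_off = any(dist_a[p] > dist_b[p] for p in dist_a)
    nobody_worse_off = all(dist_a[p] >= dist_b[p] for p in dist_a)
    return someone_better_off and nobody_worse_off

# Hypothetical welfare levels: 0 = suffering, 1 = not suffering.
only_amy_suffers = {"Amy": 0, "Susie": 1, "Bob": 1}
amy_and_susie_suffer = {"Amy": 0, "Susie": 0, "Bob": 1}
only_bob_suffers = {"Amy": 1, "Susie": 1, "Bob": 0}

# Susie is better off and nobody is worse off -> True
print(strictly_pareto_superior(only_amy_suffers, amy_and_susie_suffer))
# Bob would be made worse off, so condition (ii) fails -> False
print(strictly_pareto_superior(only_bob_suffers, amy_and_susie_suffer))
```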
Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of “more pain”. And since I think my preferred sense of “more pain” is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.
I’m not sure I see the advantage here, or what the alleged advantage is. I don’t see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone's position). However, as I argued, this stipulation is not true OF the real world because each of us didn't actually have an equal chance of being in any of our positions, and what we should do should be based on the facts, and not on a stipulation. In kbog's latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because "The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system." I have yet to respond to this latest reply because I have been too busy arguing about our senses of "more pain", but if I were to respond, I would say this: "I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world." Anyways, I don't want to say too much here, because kbog might not see it and it wouldn't be fair if you only heard my side. I'll respond to kbog's reply eventually (haha) and you can follow the discussion there if you wish.
Let me just add one thing: According to Singer's introduction to utilitarianism, Harsanyi argued that the veil of ignorance entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls' claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn't support classical utilitarianism, which says we ought to maximize total utility, not average utility.
Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.
While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers can matter in themselves.
The argument is that if:
The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured if you liked for a less extreme but clearly applicable case)
In this case, the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
Thus (and you would disagree with this implication due to your adoption of the Pareto principle) since the level of ‘total pain’ is the morally important thing, all of your possible actions are morally equivalent.
As I mentioned, I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue:
This is essentially just applying the non-identity problem to the example above. (Weirdly enough, I think the best explanation I've seen of the non-identity problem is the second half of the 'the future' section of Derek Parfit's Wikipedia page.)
The argument goes something like:
D1 If we adopt that 'total pain' is the maximal pain experienced by any person for whom we can affect how much pain they experience (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
A1 At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some).
A2 Due to the chaotic nature of the world, and the strong dependence of personal identity on birth timing (if the circumstances of one's conception change even very slightly then your identity will almost certainly be completely different), any actions in the world now will within a few generations result in a completely different set of people existing.
C1 Thus by A1 the future is going to contain someone experiencing extreme pain, but by A2 exactly who this person is will vary with different courses of action, so by D1 the 'total pain' in all cases is uniformly very high.
This is similar to the point made by JanBrauner; however, I did not find that your response to their comment particularly engaged with the core point of the extreme unpredictability of the maximum pain caused by an act.
After your most recent comment I am generally unsure exactly what you are arguing for in terms of moral theories. When arguing about which form of pain is morally important, you seem to make a strong case that one should measure the 'total pain' in a situation solely by whichever pain involved is most extreme. However, when discussing moral recommendations you don't completely focus on this. Thus I'm not sure whether this comment and its examples will miss the mark completely.
(There are also more subtle defenses, such as those relating to how much one cares about future people etc., which have thus far been left out of the discussion.)
Thanks for the exposition. I see the argument now.
You're saying that, if we determined "total pain" by my preferred approach, then all possible actions would certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.
I’ve since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.
Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.
My response:
JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action more-or-less seems to inevitably result in a different person suffering maximal pain.
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don’t find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).
I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as to make this unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.
Given that you were initially arguing (with kbog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion.
But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.
The issue is that this also applies to the case of deciding whether to set the island on fire at all.
I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn’t take any action, and that’s just absurd. Therefore, my way of determining total pain is problematic. Here “a resulting state of affairs” is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.
Well, if who suffered didn't matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:
Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 2.
Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 3.
And so forth…
According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
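If it helps, the leximin procedure I just described can be written out as a short step-by-step comparison. This is only a sketch of my own description, with made-up 0/1 welfare numbers; the function name and numbers are not from anyone's comment:

```python
def leximin_better(state_a, state_b):
    """Compare two states of affairs by leximin.

    Each state is a list of personal welfare levels (higher = better off),
    one entry per person; equal population sizes are assumed. Step k compares
    the k-th worst-off person in each state. Returns "A", "B", or "tie".
    """
    a, b = sorted(state_a), sorted(state_b)  # worst off first
    for welfare_a, welfare_b in zip(a, b):
        if welfare_a > welfare_b:
            return "A"
        if welfare_a < welfare_b:
            return "B"
    return "tie"

# Hypothetical numbers: 0 = suffering, 1 = fine.
save_amy_and_susie = [1, 1, 0]  # Bob is left suffering
save_bob = [0, 0, 1]            # Amy and Susie are left suffering
# Step 1 ties (someone suffers either way); at Step 2 the next worst-off
# person is better off when Amy and Susie are saved -> "A"
print(leximin_better(save_amy_and_susie, save_bob))
```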
My appeal to leximin is not ad hoc because it takes an individual's suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don't actually endorse leximin because leximin does not take an individual's identity seriously (i.e. it doesn't treat who suffers as morally relevant, whereas I do. I think who suffers matters).
So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting state of affairs would be morally just as bad.
Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session. It is not the longest pain, e.g. a minor headache that lasts one’s entire life. Rather, it is the most intense pain over the longest period of time. The person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Realizing this, it is unlikely that each possible action will lead to a state of affairs involving this. (Note that this is to deny A1.)
To give each possible action an equal chance is certainly not to flip a coin between murdering someone or not. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one. (There are many complexities here that make the discussion hard like what counts as a distinct action.)
However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like that classical argument against utilitarianism. Anyways, I guess, like effective altruism, I can recognize rules that forbid murdering etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather it is to show that we shouldn’t use the utilitarian way of determining “total pain”, which underlies effective altruism.
I have argued for this by
1) arguing that the utilitarian way of determining “total pain” goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which they are determining a “total moral value” based on people’s pains, which is different from determining a total pain. I still need to address this point.
2) responding to your objection against my way of determining “total pain” (first half of this reply)
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)
I think thinking in terms of 'total pain' is not normally how this is approached; instead, one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, and then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).
I bring this up since you are approaching this from a different angle than usual, which makes people's standard lines of reasoning seem more complex.
I’ll discuss this in a separate comment since I think it is one of the strongest argument against your position.
I don’t know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.
Then I am really not sure at all what you mean by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.
So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:
1. Assign a moral value to each person's experiences based on its overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.
2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
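If I have understood the two steps correctly, a toy version of the aggregation might look like this. The badness numbers (1 for a minor headache, 5 for a major one, chosen so that 5 minor headaches feel about as bad as 1 major one) and the use of a simple within-person sum as a stand-in for the overall what-it's-like valuation are my own illustrative assumptions:

```python
MINOR_HEADACHE = 1.0  # hypothetical experiential badness of one minor headache
MAJOR_HEADACHE = 5.0  # hypothetical experiential badness of one major headache

def global_moral_badness(personal_experiences):
    """personal_experiences: one list of experience-badness values per person."""
    # Step 1: one moral value per person (here approximated by summing their experiences).
    personal_values = [sum(experiences) for experiences in personal_experiences]
    # Step 2: aggregate the personal values into a single global figure.
    return sum(personal_values)

five_people_one_minor_each = [[MINOR_HEADACHE] for _ in range(5)]
one_person_one_major = [[MAJOR_HEADACHE]]
one_person_five_minors = [[MINOR_HEADACHE] * 5]

print(global_moral_badness(five_people_one_minor_each))  # 5.0
print(global_moral_badness(one_person_one_major))        # 5.0
print(global_moral_badness(one_person_five_minors))      # 5.0
```

On these illustrative numbers, the three cases come out equally bad under this way of aggregating.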
This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's life or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's life because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's death being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance.
In any case, if we assign a moral value to each person's experience in the same way that we might assign a moral value to each person's life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1., I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I've attributed to kbog. (I added the "[...]" to make the sentence structure more clear.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don't think what we ought to do is to OUTRIGHT prevent the morally worse case; rather we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.
Please don't be alarmed (haha). I assume you're aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognize other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint applies in some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.
You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here's an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of "more pain" (i.e. my way of quantifying and comparing pains), the single person suffering the major headache is morally worse than the 100 people each suffering a minor headache because his major headache is experientially worse than any of the other people's minor headaches. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is, experientially speaking, than any one of the others (i.e. how much morally worse his suffering is relative to the 100's suffering).
Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person’s suffering.
I hope that helps.
On ‘people should have a chance to be helped in proportion to how much we can help them’ (versus just always helping whoever we can help the most).
(Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one 'should' always pick the 'morally best' action. You could do that in this case by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point…)
How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.
In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of 'actually helping people' in favour of it; but then it seems strange that you argue for it so forcefully.
As a separate point, this form of reasoning seems rather incompatible with your claims about ‘total pain’ being morally important, and also determined solely by whoever is experiencing the most pain. Thus, if you follow your approach and give some chance of helping people not experiencing the most pain, in the case when you do help them, the ‘total pain’ does not change at all!
For example:
Suppose Alice is experiencing 10 units of suffering (by some common metric)
10n people (call them group B) are experiencing 1 units of suffering each
We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of ‘total pain’ remains at 10 as Alice is not helped.
This means that an n/(n+1) proportion of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
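To make that arithmetic concrete, here is a quick check of the proportions (the function name and the sample values of n are just illustrative):

```python
from fractions import Fraction

def chances_of_help(n):
    """Alice suffers 10 units; each of the 10n people in group B suffers 1 unit.
    Help is allocated with probability proportional to suffering."""
    total_suffering = 10 + 10 * n
    p_alice = Fraction(10, total_suffering)             # = 1/(n+1)
    p_someone_in_b = Fraction(10 * n, total_suffering)  # = n/(n+1)
    return p_alice, p_someone_in_b

for n in (1, 10, 1000):
    p_alice, p_b = chances_of_help(n)
    # Whenever a group-B member is helped, Alice's 10 units remain, so the
    # maximal-pain 'total pain' is unchanged with probability n/(n+1).
    print(n, p_alice, p_b)
# As n grows, n/(n+1) approaches 1: the chance that the chosen action leaves
# the 'total pain' untouched can be made arbitrarily close to certainty.
```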
Finally I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.
Indeed, for another example:
Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.
You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However, in the case that you rolled your 3^^^3 sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".
(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)
Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I’m back :P
This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers, where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person's chance ought to be in proportion to what he/she has to suffer because suffering also matters.
Now you're asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering, with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer?
Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.
One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do, as you suggest, seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in and of itself, but if the fair thing to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then its moral relevance might be derived. If so, I think we can ignore the notion of fairness.
This is a fantastic objection. This objection is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection shows a serious problem with giving each person a chance of being saved proportional to his/her suffering insofar as it shows that doing so can lead us to give a very very small chance to someone who has a lot to suffer when it intuitively seems to me that we should give him a much higher chance of being saved given how much more he/she has to suffer relative to any other person.
So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.
I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.
I had anticipated this objection when I wrote my post. In footnote 4, I wrote:
“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”
Admittedly, there are two potential problems with what I say in my footnote.
1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.
2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).
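One way to make that worry precise without literally dividing by infinity: write h for the experiential badness of the minor headache and T for the badness of the torture (both just placeholder values, not anything from my post), so that the proportional-chance rule gives the headache sufferer a chance of h/(h+T). Then

$$\lim_{T \to \infty} \frac{h}{h+T} = 0,$$

so if the torture is treated as unboundedly worse than the headache, the rule does converge on a 0% chance for the headache sufferer (i.e. outright saving the person facing torture), though for any finite T the chance is small but nonzero.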
But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.
All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.
I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters… I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker—something more acceptable to me… (although I feel doubtful about this).
Well most EAs, probably not most people :P
But yes, I think most EAs apply this ‘merchandise’ approach weighed by conscious experience.
In regard to your discussion of moral theories and side constraints: I know there is a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right, then you should make this clear from the start (and ideally bake it into what you mean by 'morally worse').
Basically I think sentences like:
are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.
Hence if you say 'what is morally relevant is the maximal pain being experienced by someone', then I expect all I need to tell you about actions, for you to decide between them, is how they affect the maximal pain being experienced by someone.
Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles).
I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.
FYI, I have since reworded this as “So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:”
I think it is a more precise formulation. In any case, we’re on the same page.
The way I phrased Objection 1 was as follows: “One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie.”
Notice that this objection in argument form is as follows:
P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.
P2) We ought to prevent the morally worst case.
C) Therefore, we should help Amy and Susie over Bob.
My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.
Given this premise, I’ve been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of “involves more pain than”. So I recently started arguing that my sense is the sense that really matters.
Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But for other moral theorists, they would not because not all things that they take to matter (i.e. to be morally relevant, to have moral value, etc) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.
However, you are right that I should make this aspect of my work more clear.
Some of your quotes are broken in your comment, you need a > for each paragraph (and two >s for double quotes etc.)
I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!
I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side-constraint you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.
In any case as I have given you plenty of other comment threads to think about I am happy to leave this one here—my point was just a call for clarity.
I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.
By “you switched”, do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I’ve switched?
And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?
Yes, “switched” was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.
To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).
I see the problem. I will fix this. Thanks.