> So you’re suggesting that most people aggregate different people’s experiences as follows:
Well most EAs, probably not most people :P
But yes, I think most EAs apply this ‘merchandise’ approach weighted by conscious experience.
Regarding your discussion of moral theories and side constraints: I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of which action is right, then you should make this clear from the start (and ideally bake it into what you mean by ‘morally worse’).
Basically I think sentences like:
> “I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case”
are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using ‘morally worse’ in a non-standard way (and possibly use a different term). I have the intuition that if you say “X is the morally relevant factor”, then which actions you say are right will depend solely on how they affect X.
Hence if you say ‘what is morally relevant is the maximal pain being experienced by someone’, then I expect all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone.
Obviously language is flexible, but I think if you deviate from this without clear disclaimers it is liable to cause confusion (again, at least in EA circles).
I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I’ll make a separate comment to discuss it.
> So you’re suggesting that most people aggregate different people’s experiences as follows:
FYI, I have since reworded this as “So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:”
I think it is a more precise formulation. In any case, we’re on the same page.
> Basically I think sentences like:
> “I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case”
> are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using ‘morally worse’ in a non-standard way (and possibly use a different term). I have the intuition that if you say “X is the morally relevant factor”, then which actions you say are right will depend solely on how they affect X.
The way I phrased Objection 1 was as follows: “One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie.”
Notice that this objection in argument form is as follows:
P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.
P2) We ought to prevent the morally worst case.
C) Therefore, we should help Amy and Susie over Bob.
My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: if two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e., twice as morally bad as) one person suffering.
Given this premise, I’ve been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of “involves more pain than”. So I recently started arguing that my sense is the sense that really matters.
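To make the two senses concrete, here is a minimal sketch of the contrast (the pain values and function names are my own illustrative choices, not anything from the original discussion): on the aggregative reading, pain is summed across people, while on the per-person reading what matters is how much pain any single individual suffers.

```python
# Illustrative sketch only: toy pain values, hypothetical helper names.
# Two readings of "case A involves more pain than case B".

def total_pain(case):
    """Aggregative sense: sum the pain across everyone in the case."""
    return sum(case.values())

def worst_individual_pain(case):
    """Per-person sense: the most pain any single individual suffers."""
    return max(case.values())

# The same kind of suffering (intensity 5) occurs in each case.
two_suffer = {"Amy": 5, "Susie": 5}   # the case where Bob is helped instead
one_suffers = {"Bob": 5}              # the case where Amy and Susie are helped instead

# Aggregative sense: two people suffering involves more pain, so P1 holds.
print(total_pain(two_suffer) > total_pain(one_suffers))                         # True

# Per-person sense: no one suffers more in either case, so P1 fails.
print(worst_individual_pain(two_suffer) > worst_individual_pain(one_suffers))   # False
```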
Anyway, notice that P2) has not been debated. I understand that consequentialists would accept P2). But other moral theorists would not, because not all the things they take to matter (i.e., to be morally relevant, to have moral value, etc.) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience, and who suffers it. I think the latter morally relevant thing is best captured as a side constraint.
However, you are right that I should make this aspect of my work more clear.
Some of your quotes are broken in your comment; you need a > for each paragraph (and two >s for nested quotes, etc.).
I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!
I actually think most (maybe all?) moral theories can be baked into the goodness/badness of states of affairs. If you want to incorporate a side constraint, you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.
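One rough way to picture that move (a sketch under my own assumptions, not something spelled out in the thread) is a lexicographic ranking: whether the constraint is violated is compared first, and ordinary goodness only matters among states that respect it, so even a very good violating state is ranked below every non-violating one.

```python
# Illustrative sketch: baking a side constraint into a ranking of states of affairs.
# States are compared lexicographically: constraint violation first, goodness second,
# so any violating state ranks below every non-violating state.

from typing import NamedTuple

class State(NamedTuple):
    label: str
    violates_constraint: bool
    goodness: float

def betterness_key(state: State):
    # False sorts before True, so non-violating states come first;
    # within each group, higher goodness (lower -goodness) comes first.
    return (state.violates_constraint, -state.goodness)

states = [
    State("modest outcome", violates_constraint=False, goodness=3.0),
    State("great outcome via violation", violates_constraint=True, goodness=100.0),
    State("good outcome", violates_constraint=False, goodness=10.0),
]

# Best-to-worst: the violating state comes last despite its high goodness.
for s in sorted(states, key=betterness_key):
    print(s.label)
```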
In any case as I have given you plenty of other comment threads to think about I am happy to leave this one here—my point was just a call for clarity.
I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.
By “you switched”, do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I’ve switched?
And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?
Yes, “switched” was a bit strong; I meant that by default people will assume a standard usage, so if you only reveal later that you are actually using a non-standard definition, people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.
To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a ‘show help’ button that explains some of these things).
I see the problem. I will fix this. Thanks.