Of course, it is possible that within the cow’s physical system’s life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow’s physical system are felt by a single subject.
That’s what I’m interested in a definition of. What makes it a “single subject”? How is this a binary term?
I am making a greater than/less than comparison. That comparison is with pain, which results from the neural chemical reactions. There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don’t see any reason to treat this differently than the underlying chemical reactions.
No problem on the caps.
REVISED TO BE MORE CLEAR ON MAR 19:
You also write, “There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don’t see any reason to treat this differently than the underlying chemical reactions.”
Well, to me the reason is obvious: when we say that 5 minor pains in one person is greater than (i.e. worse than) a major pain in one person, we are using “greater than” in an EXPERIENTIAL sense. On the other hand, when we say that 10 neural impulses in one person is greater than 5 neural impulses in one person, we are using “greater than” in a QUANTITATIVE/NUMERICAL sense. These two comparisons are very different in nature. The former is about the relative STRENGTH of the pains; the latter is about the relative QUANTITIES of neural impulses.
So just because 10 neural impulses is greater than 5 neural impulses in the numerical sense, whether the 10 impulses take place in 1 brain or 5 brains, that does NOT mean that 5 minor pains is greater than 1 major headache in the experiential sense, whether the 5 minor pains are realized in 1 brain or 5 brains.
This relates back to why I said it can be very misleading to represent pain comparisons in numerals like 5*2>5. Such representations do not distinguish between the two senses described above, and thus can easily lead one to conflate them.
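If it helps, here is a rough way to picture the two senses in code (purely illustrative; the intensity numbers are invented for the sketch, not part of my argument):

```python
# Purely illustrative: the pain "intensities" below are invented numbers.
minor, major = 2, 5

# QUANTITATIVE/NUMERICAL sense: compare totals, ignoring who feels what.
assert 5 * minor > major  # 10 > 5, however the pains are distributed

# EXPERIENTIAL sense: compare how bad things are for a subject.
# Within ONE subject, the 5 minor pains may plausibly add up:
one_person = [minor] * 5
worst_within_one_subject = sum(one_person)  # 10

# Spread across FIVE subjects, no single subject feels more than `minor`:
five_people = [[minor] for _ in range(5)]
worst_for_any_one_subject = max(sum(p) for p in five_people)  # 2

print(worst_within_one_subject > major)   # True
print(worst_for_any_one_subject > major)  # False
```

The arithmetic 5*2 > 5 comes out the same either way; the experiential comparison does not, and that is exactly the conflation I am warning against.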
Just to make sure we’re on the same page here, let me summarize where we’re at:
In choice situation 2 of my paper, I said that, supposing any person would rather endure 5 minor headaches of a certain sort than 1 major headache of a certain sort when put to the choice, a case in which Al suffers 5 such minor headaches is morally worse than a case in which Emma suffers 1 such major headache. And the reason I gave for this is that Al’s 5 minor headaches are more painful (i.e. worse) than Emma’s major headache.
In choice situation 3, however, the 5 minor headaches are spread across 5 different people: Al and four others. Here I claim that the case in which Emma suffers a major headache is morally worse than the case in which the 5 people each suffer 1 minor headache. And the reason I gave for this is that Emma’s major headache is more painful (i.e. worse) than each of the 5 people’s minor headaches.
Against this, you claim that if the supposition from choice situation 2 carries over to choice situation 3 (the supposition that any person would rather endure 5 minor headaches than 1 major headache if put to the choice), then the case in which the 5 people each suffer 1 minor headache is morally worse than the case in which Emma suffers a major headache. And your reason for saying this is that you think the 5 minor headaches spread across the 5 people are more painful (i.e. worse) than Emma’s major headache.
THAT is what I took you to mean when you wrote: “Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people.”
As a result, this whole time, I have been trying to explain why 5 minor headaches spread across five people CANNOT be more painful (i.e. worse) than a major headache, even though the same 5 minor headaches, when all had by one person, can be (and would be, under the supposition).
Importantly, I never took myself to be disagreeing with you on whether 5 instances of a minor headache is more than 1 instance of a major headache. Clearly, 5 instances of a minor headache is more than 1 instance of a major headache, regardless of whether the 5 instances were all experienced by a single subject-of-experience or spread across 5.
I took our disagreement to be about whether 5 instances of a minor headache, when spread across 5 people, is more painful (i.e. worse) than an instance of a major headache.
My view is that only when the 5 headaches are all had by one subject-of-experience could they be more painful (i.e. worse) than a major headache. Moreover, my view is that it literally makes no sense to say (or that it is at least false to say, even if it made sense) that the 5 headaches, when spread across 5 people, are more painful (i.e. worse) than a major headache, under the supposition.
If I am right, then in choice situation 3, the morally worse case should be the case in which Emma suffers one major headache, not the case in which 5 people each suffer one minor headache.
In response to your question, “what makes a single subject ‘a single subject’?”, here is another stab: Within any given physical system that can realize subjects of experience (e.g. a cow’s brain), the subject-of-experience at t1 (S1) is numerically identical to the subject-of-experience at t2 (S2) if and only if an experience at t1 (E1) and an experience at t2 (E2) are both felt by S1. That is, S1 = S2 iff S1 feels E1 and E2.
That in conjunction with the definition I provided earlier is probably the best I can do to communicate what I take a subject-of-experience to be, and what makes a particular subject-of-experience the numerically same subject-of-experience over time.
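For what it’s worth, the criterion can be sketched in code as well (my own toy model; the names and data structure are invented just to state the iff plainly):

```python
# Toy model of the identity criterion above; everything here is invented
# for illustration. An experience records which subject feels it.
from collections import namedtuple

Experience = namedtuple("Experience", ["label", "time", "felt_by"])

def numerically_identical(e1, e2):
    """S1 = S2 iff the experience at t1 (E1) and the experience at t2 (E2)
    are both felt by one and the same subject."""
    return e1.felt_by == e2.felt_by

e1 = Experience("E1", 1, felt_by="cow-subject-A")
e2 = Experience("E2", 2, felt_by="cow-subject-A")
e3 = Experience("E3", 2, felt_by="cow-subject-B")

assert numerically_identical(e1, e2)      # one subject persisting from t1 to t2
assert not numerically_identical(e1, e3)  # two distinct subjects in one system
```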
To your first comment, I disagree. I think it’s the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?
I think there is more total pain. I’m not counting the # of headaches. I’m talking about the total amount of pain.
Can you define S1?
We may not, as these discussions tend to go. I’m fine calling it.
I think we have to get closer to defining a subject of experience (S1); I think I would need this to go forward. But here’s my position on the issue: I think moral personhood doesn’t make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus). I don’t see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn’t be. The conscious experiences are different at different times and in different brains; I see this as a matter of degree of similarity.
Hi Michael,
I removed the comment about worrying that we might not reach a consensus because I worried that it might send you the wrong idea (i.e. that I don’t want to talk anymore). It’s been tiring, I have to admit, but also enjoyable and helpful. Anyways, you clearly saw my comment before I removed it. But yeah, I’m good with talking on.
I agree that experiences are the result of chemical reactions; however, the relations “X being experientially worse than Y” and “X being greater in number than Y” are relevantly different in nature. Someone by the name of “kbog” recently read my very first reply to you (the updated edition) and raised basically the same concern as you have here, and I think I have responded to him pretty aptly. So if you don’t mind, can you read my discussion with him:
http://effective-altruism.com/ea/1lt/is_effective_altruism_fundamentally_flawed/dmu
I would have answered you here, but I’m honestly pretty drained from replying to kbog, so I hope you can understand. Let me know what you think.
Regarding defining S1, I don’t think I can do better than to say that S1 is a thing that has, or is capable of having, experience(s). I add the phrase ‘or is capable of having’ this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I’d like to think that I am still around—i.e. that the particular subject-of-experience that I am is still around. However, it’s also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That’s spooky, but perhaps true.
Anyways, I can’t seem to figure out why you need any better a definition of a subject-of-experience than that. I feel like my definition sufficiently distinguishes it from other kinds of things. Moreover, I have provided you with a criterion for identity over time. Shouldn’t this be enough?
You write, “I think moral personhood doesn’t make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus).”
I agree with all of this, but I would insist those NEED NOT BE numerical differences, just qualitative differences. A mind can be very qualitatively different (e.g. a big personality change) from one moment to the next, but that does not necessarily mean that it is a numerically different mind. Likewise, a brain can be very qualitatively different (e.g. a big change in shape) from one moment to the next, but that does not necessarily mean that it is a numerically different brain.
You then write, “I don’t see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn’t be.”
Well, if a particular mind is the numerically same mind before and after a big qualitative change (e.g., due to a brain injury), then clearly there is reason to call it the same mind/person in a way that two minds of two coexisting brains wouldn’t be. After all, it’s the numerically same mind, whereas two minds of two coexisting brains are clearly two numerically different minds.
You might agree that there is a literal reason to call it the same mind, but deny that there is a moral reason that wouldn’t be true of two minds of two coexisting brains. But I think the literal reason constitutes or provides the moral reason: if a mind is numerically the same mind before and after a big qualitative change (e.g. big personality change), then that means whatever experiences are had by that mind before and after the change are HAD BY THAT NUMERICALLY SAME MIND. So if that particular mind suffered a headache before the radical change and then suffered a headache after the change, it is THAT PARTICULAR MIND THAT SUFFERS BOTH. That is enough reason to also call that mind the same mind in a moral sense that wouldn’t also be true of two numerically different minds of two coexisting brains.
I didn’t quite understand the sentences after that.
FYI, I’m pretty busy over the next few days, but I’d like to get back to this conversation at some point. If I do, it may be a while, though.
No worries!