To your first comment, I disagree. I think it’s the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?
I think there is more total pain. I’m not counting the # of headaches. I’m talking about the total amount of pain.
Can you define S1?
We may not, as these discussions tend to go. I’m fine with calling it.
I think we have to get closer to defining a subject of experience (S1); I would need this to go forward. But here’s my position on the issue: I think moral personhood doesn’t make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus). I don’t see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn’t be. The conscious experiences differ at different times and between different brains; I see this as a matter of degree of similarity.
Hi Michael,
I removed the comment about worrying that we might not reach a consensus because I worried that it might give you the wrong idea (i.e. that I don’t want to talk anymore). It’s been tiring, I have to admit, but also enjoyable and helpful. Anyways, you clearly saw my comment before I removed it. But yeah, I’m good with talking on.
I agree that experiences are the result of chemical reactions; however, the relations “X being experientially worse than Y” and “X being greater in number than Y” are relevantly different in nature. Someone by the name of “kbog” recently read my very first reply to you (the updated edition) and raised basically the same concern as you have here, and I think I have responded to him pretty aptly. So if you don’t mind, can you read my discussion with him:
http://effective-altruism.com/ea/1lt/is_effective_altruism_fundamentally_flawed/dmu
I would have answered you here, but I’m honestly pretty drained from replying to kbog, so I hope you can understand. Let me know what you think.
Regarding defining S1, I don’t think I can do better than to say that S1 is a thing that has, or is capable of having, experience(s). I add the phrase ‘or is capable of having’ this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I’d like to think that I am still around—i.e. that the particular subject-of-experience that I am is still around. However, it’s also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That’s spooky, but perhaps true.
Anyways, I can’t seem to figure out why you need any better definition of a subject-of-experience than that. I feel like my definition sufficiently distinguishes it from other kinds of things. Moreover, I have provided you with a criterion for identity over time. Shouldn’t this be enough?
You write, “I think moral personhood doesn’t make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus).”
I agree with all of this, but I would insist that those NEED NOT BE numerical differences, just qualitative differences. A mind can be very qualitatively different (e.g. a big personality change) from one moment to the next, but that does not necessarily mean that it is a numerically different mind. Likewise, a brain can be very qualitatively different (e.g. a big change in shape) from one moment to the next, but that does not necessarily mean that it is a numerically different brain.
You then write, “I don’t see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn’t be.”
Well, if a particular mind is the numerically same mind before and after a big qualitative change (e.g., due to a brain injury), then clearly there is reason to call it the same mind/person in a way that two minds of two coexisting brains wouldn’t be. After all, it’s the numerically same mind, whereas two minds of two coexisting brains are clearly two numerically different minds.
You might agree that there is a literal reason to call it the same mind, but deny that there is a moral reason that wouldn’t be true of two minds of two coexisting brains. But I think the literal reason constitutes or provides the moral reason: if a mind is numerically the same mind before and after a big qualitative change (e.g. a big personality change), then whatever experiences are had by that mind before and after the change are HAD BY THAT NUMERICALLY SAME MIND. So if that particular mind suffered a headache before the radical change and then suffered a headache after the change, it is THAT PARTICULAR MIND THAT SUFFERS BOTH. That is enough reason to also call that mind the same mind in a moral sense, in a way that wouldn’t be true of two numerically different minds of two coexisting brains.
I didn’t quite understand the sentences after that.
FYI, I’m pretty busy over the next few days, but I’d like to get back to this conversation at some point. If I do, it may be a bit, though.
No worries!