This was a really thought-provoking read, thank you!
I think I agree with Richard Chappell's comment that "the more you manipulate my values, the less the future person is me".
In this particular case, if I take the pill, my preferences, dispositions, and attitudes would be completely transformed in an instant. These are a huge part of what makes me who I am, so I think that after taking this pill I would become a completely different person, in a very literal sense. It would be a new person who had access to all of my memories, but it would not be me.
From this point of view, there is no essential difference between this thought experiment and the common objection to total utilitarianism in which you consider killing one person and replacing them with someone new so that total well-being is increased.
This is still a troubling thought experiment, of course, but I think it does weaken your appeal to the Platinum Rule? We are no longer talking about treating a person differently from how they would want to be treated, in isolation. We just have another utilitarian thought experiment in which we are considering harming person X in order to benefit a different person Y.
And I think my response to both thought experiments is the same. Killing a person who does not want to be killed, or changing the preferences of someone who does not want them changed, does a huge amount of harm (at least on a preference-satisfaction version of utilitarianism), so the assumption in these thought experiments that overall preference satisfaction is nevertheless increased is doing a lot of work, more work than it might appear at first.
We can come up with an example with a similarly important moral loss, but without an apparently complete change of identity. I don't think giving up your most important preference completely changes who you are. You don't become a completely different person when you come to love someone, or stop loving them, even though this is a very important part of you. It may still be an important partial identity change, though, so a kind of partial replacement.
Furthermore, we can change your most important preferences without changing all your dispositions: we can keep not just your memories but also, say, your personality traits and intelligence.
I agree that we can imagine a similar scenario where your identity is changed to a much lesser degree. But I'm still not convinced that we can straightforwardly apply the Platinum Rule to such a scenario.
If your subjective wellbeing is increased after taking the pill, then one of the preferences that must be changed is your preference not to take the pill. This means that when we try to apply the Platinum Rule, "treat others as they would have us treat them", we are naturally led to ask: "as they would have us treat them when?" If their preference, after taking the pill, to have taken it is stronger than their preference, before taking it, not to take it, then the Platinum Rule becomes less straightforward.
I can imagine two ways of clarifying the rule here, both of which you already allude to in your post, that would explain why forcing someone to take the pill would be wrong:
1. We should treat others as they would have us treat them at the time we are making the decision. But this would imply that if someone's preferences are about to change, naturally and predictably, for the rest of their life, then we should disregard that when trying to decide what is best for them, and consider only what they want right now. This seems much more controversial than the original statement of the rule.
2. We should treat others as they would have us treat them, considering the preferences they would have over their lifetime if we did not act. But this would imply that if someone was about to eat the pill by accident, thinking it was just a sweet, and we knew it was against their current wishes, then we should not try to stop them or warn them. This would create a very odd action/inaction distinction. Again, this seems much more controversial than the original statement of the rule.
In the post you say the Platinum Rule might be the most important thing for a moral theory to get right, and I think I agree with you on this. It is something that seems so natural and obvious that I want to take it as a kind of axiom. But neither of these two extensions to it feels that obvious any more. They both seem very controversial.
I think the rule only properly makes sense when applied to a person-moment, rather than to a whole person throughout their life. If this is true, then I think my original objection still applies. We aren't dealing with a situation where we can apply the Platinum Rule in isolation. Instead, we have just another utilitarian trade-off between the welfare of one (set of) person(-moments) and another.
Ya, I'm not totally sold on the Platinum Rule itself. I think I'm gesturing at one of the most important things to get right (to me), but I don't mean it's specifically the Platinum Rule. I'm trying to develop this further in some other pieces for this sequence.
That being said, I think adding preferences (or allowing new preferences to be added) is importantly different from other tradeoffs, as I discuss in "People aren't always right about what's best for themselves".
I further motivate and describe views along the lines of this post elsewhere in this sequence.