Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion "not actually important", but the drowning child example "important"?
Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion "not actually important", but the drowning child example "important"?
Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.
I'm sorry, but I don't understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't. As an analogy, consider Kant's theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant's theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual.
But maybe I'm misunderstanding what you meant by "not actually important"?
Well, you can argue that the hypothetical situation is sufficiently exotic that you don't expect your intuitions to be reliable there.
It's actually pretty reasonable to me to say that the shallow pond example is simple, realistic and important, compared to the repugnant conclusion, which is abstract, unusual, unreliable and hence useless.
If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't}.
I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.
As an analogy
I guess I could attach some sort of plausibility score to moral thought experiments. Rescuing a drowning child gets a score near 1, since rescue situations really do happen and it's just a matter of detail about how much it costs the rescuer. As applied to donating to charity, the score might have to be lowered a little to account for how donating to charity isn't an exact match for the child in the pond.
The Nazi officials case… seems pretty plausible to me? Like, didn't that actually happen?
An intermediate case between the drowning child and creating large populations would be murdering someone to harvest their organs. This is feasible today, but irrelevant since no-one is altruistically murdering people for organs. I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory." (And if they want to analyse it really closely and work out the boundaries of when killing one person to save others is moral and when not, then that's also a reasonable response.)
A thought experiment involving the creation of large populations gets a plausibility score near zero.
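(To gesture at how the weighting could work, here is a rough sketch in notation of my own invention, not anything precise I'm committed to: a theory T could be penalised for each thought experiment E in proportion to how plausible E is and how badly T's verdict clashes with intuition,

\[
\mathrm{penalty}(T) \;=\; \sum_{E} p(E)\cdot d\bigl(T(E),\ \mathrm{intuition}(E)\bigr),
\]

where p(E) is the plausibility score between 0 and 1 and d measures the size of the clash. With p near 1 for the drowning child and p near 0 for large-population cases, the repugnant conclusion contributes almost nothing to the penalty, which is roughly the sense in which I'd call it unimportant.)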
I reject the implication inside the curly brackets that I added.
[...]
I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory."
I find your position unclear. On the one hand, you suggest that thought experiments involving situations that aren't actual don't constitute a problem for a theory (first quote above). On the other hand, you imply that they do constitute a problem, which is addressed by restricting the scope of the theory so that it doesn't apply to such situations (second quote above). Could you clarify?
Maybe I've misinterpreted "repugnant" here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for "repugnant". But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context; there might be nuances that I'm missing), so I'll try to describe my position using other words.
If my moral theory, when applied to some highly unrealistic thought experiment (which doesn't have some clear analog to something more realistic), results in a conclusion that I really don't like, then:
I accept that my moral theory is not a complete and correct theory; and
this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Thank you for the clarification. I think I understand your position now.
this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory's simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, this causes you to either restrict the scope of the theory, and thus make it more complex, or recognize that it doesn't fit the data as well as you thought before. In either case, it seems you should update by believing the theory to a lesser degree.
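To put the same worry slightly more formally (a rough Bayesian sketch in my own notation, not meant to be rigorous): write T for the theory and I for your intuitions, and think of your credence as

\[
P(T \mid I) \;\propto\; P(I \mid T)\, P(T),
\]

where the prior P(T) rewards simplicity and the likelihood P(I | T) measures how well the theory fits your intuitions. Restricting the theory's scope makes it more complex and lowers P(T); biting the bullet on the counterintuitive implication lowers P(I | T). Either way the posterior P(T | I) falls, so your confidence in the theory should drop.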
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory?
I think my disagreement is mostly on (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't know of any simple rules to decide what principles I deem more important in which situations. Certainly there are many realistic problems which I think could go either way.
But I agree that, all other things being equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully considers enormous aggregative ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would solve the repugnant conclusion (whatever I think "repugnant" means in this future!). Applied to present-me's day-to-day problems, such a simplified theory would likely give slightly different answers from what I think today: maybe the uncertainty I have today about certain court cases would be resolved by one of the principles that future-me thinks of.
But I don't think the answers will change a lot. I think my current moral theory basically gives appropriate answers (sometimes uncertain ones) to my problems today. There's wiggle-room in places, but there are also some really solid intuitions that I don't expect future-me to sacrifice. Rescuing the drowning child (at least when I live in a world without the ability to create large numbers of sentient beings!) would be one of these.