"No-one responds to the drowning child by saying, 'well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit'. It is just not a consideration."
"It is not an issue for altruists otherwise -- everyone saves the drowning child."
I don't understand what you are saying here. Are you claiming that because "everyone" does do X or because "no-one" does not do X (putting those in quotation marks because I presume you don't literally mean what you wrote; rather, you mean "the vast majority of people would/would not do X"), X must be morally correct?
That strikes me as... problematic.
Letting the child drown in the hope that
a) there's an infinite number of life-forms outside our observable universe, and
b) the correct moral theory does not simply require counting utilities (or whatever) in some local region
strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.
Population ethics (including infinite ethics) is replete with impossibility theorems showing that no moral theory can satisfy all of our considered intuitions. (See this paper for an overview.) So you cannot simply point to a counterintuitive implication and claim that it disproves the theory from which it follows. If that procedure was followed consistently, it would disprove all moral theories.
I consider this a reason to not strictly adhere to any single moral theory.
This statement is ambiguous. It either means that you adhere to a hybrid theory made up of parts of different moral theories, or that you don't adhere to a moral theory at all. If you adhere to a hybrid moral theory, this theory is itself subject to the impossibility theorems, so it, too, will have counterintuitive implications. If you adhere to no theory at all, then nothing is right or wrong; a fortiori, not rescuing the child isn't wrong, and a theory's implying that not rescuing the child isn't wrong cannot therefore be a reason for rejecting this theory.
OK -- I mean the hybrid theory -- but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):
In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say) by weighing up competing intuitions.
Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can write papers on", rather than what is actually important. The repugnant conclusion falls under this category.
Whichever way it works out, I stick resolutely to saving the drowning child.
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion "not actually important", but the drowning child example "important"?
Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion "not actually important", but the drowning child example "important"?
Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.
I'm sorry, but I don't understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't. As an analogy, consider Kant's theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant's theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual.
But maybe I'm misunderstanding what you meant by "not actually important"?
Well, you can argue that the hypothetical situation is sufficiently exotic that you don't expect your intuitions to be reliable there.
It's actually pretty reasonable to me to say that the shallow pond example is simple, realistic and important, compared to the repugnant conclusion, which is abstract, unusual, unreliable and hence useless.
If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't}.
I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.
As an analogy
I guess I could attach some sort of plausibility score to moral thought experiments. Rescuing a drowning child gets a score near 1, since rescue situations really do happen and it's just a matter of detail about how much it costs the rescuer. As applied to donating to charity, the score might have to be lowered a little to account for how donating to charity isn't an exact match for the child in the pond.
The Nazi officials case... seems pretty plausible to me? Like, didn't that actually happen?
Something of a more intermediate case between the drowning child and creating large populations would be the idea of murdering someone to harvest their organs. This is feasible today, but irrelevant since no-one is altruistically murdering people for organs. I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory." (And if they want to analyse it really closely and work out the boundaries of when killing one person to save others is moral and when not, then that's also a reasonable response.)
A thought experiment involving the creation of large populations gets a plausibility score near zero.
I reject the implication inside the curly brackets that I added.
[...]
I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory."
I find your position unclear. On the one hand, you suggest that thought experiments involving situations that aren't actual don't constitute a problem for a theory (first quote above). On the other hand, you imply that they do constitute a problem, which is addressed by restricting the scope of the theory so that it doesn't apply to such situations (second quote above). Could you clarify?
Maybe I've misinterpreted "repugnant" here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for "repugnant". But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context -- there might be nuances that I'm missing), so I'll try to describe my position using other words.
If my moral theory, when applied to some highly unrealistic thought experiment (which doesn't have some clear analog to something more realistic), results in a conclusion that I really don't like, then:
I accept that my moral theory is not a complete and correct theory; and
this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Thank you for the clarification. I think I understand your position now.
this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory's simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, this causes you to either restrict the scope of the theory, and thus make it more complex, or recognize that it doesn't fit the data as well as you thought before. In either case, it seems you should update by believing the theory to a lesser degree.
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory?
I think my disagreement is mostly on (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't know of any simple rules to decide what principles I deem more important in which situations. Certainly there are many realistic problems which I think could go either way.
But I agree that, all other things equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully considers enormous aggregative ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would solve the repugnant conclusion (whatever I think "repugnant" means in this future!). Applied to present-me's day-to-day problems, such a simplified theory will likely give slightly different answers from what I think today: maybe the uncertainty I have today about certain court cases would be resolved by one of the principles that future-me thinks of.
But I don't think the answers will change a lot. I think my current moral theory basically gives appropriate answers (sometimes uncertain ones) to my problems today. There's wiggle-room in places, but there are also some really solid intuitions that I don't expect future-me to sacrifice. Rescuing the drowning child (at least when I live in a world without the ability to create large numbers of sentient beings!) would be one of these.
I think it quite obvious that if one does not observe a given theory, they are not thereby disarmed from criticising that theory; similarly, a rejection of moralism is not equivalent to your imputed upshot that "nothing is right or wrong" (although we can imagine cases in which that could be so). In the case of the former, critiquing a theory that adheres to but contradicts intuitionistic premises is a straightforward instance of immanent critique. In the case of the latter, quite famously, neither Bernard Williams nor Raymond Geuss had any truck with moralism, yet clearly they were not "relativists".