"No-one responds to the drowning child by saying, 'well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit'. It is just not a consideration."
"It is not an issue for altruists otherwise – everyone saves the drowning child."
I don't understand what you are saying here. Are you claiming that because "everyone" does do X or because "no-one" does not do X (putting those in quotation marks because I presume you don't literally mean what you wrote; rather, you mean the "vast majority of people would/would not do X"), X must be morally correct?
That strikes me as... problematic.

Letting the child drown in the hope that

a) there's an infinite number of life-forms outside our observable universe, and

b) the correct moral theory does not simply require counting utilities (or whatever) in some local region

strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.
Population ethics (including infinite ethics) is replete with impossibility theorems showing that no moral theory can satisfy all of our considered intuitions. (See this paper for an overview.) So you cannot simply point to a counterintuitive implication and claim that it disproves the theory from which it follows. If that procedure were followed consistently, it would disprove all moral theories.
I consider this a reason to not strictly adhere to any single moral theory.
This statement is ambiguous. It either means that you adhere to a hybrid theory made up of parts of different moral theories, or that you don't adhere to a moral theory at all. If you adhere to a hybrid moral theory, this theory is itself subject to the impossibility theorems, so it, too, will have counterintuitive implications. If you adhere to no theory at all, then nothing is right or wrong; a fortiori, not rescuing the child isn't wrong, and a theory's implying that not rescuing the child isn't wrong cannot therefore be a reason for rejecting this theory.
OK – I mean the hybrid theory – but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):
In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.
Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can write papers on", rather than what is actually important. The repugnant conclusion falls under this category.
Whichever way it works out, I stick resolutely to saving the drowning child.
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion "not actually important", but the drowning child example "important"?
Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion "not actually important", but the drowning child example "important"?
Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.
I'm sorry, but I don't understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't. As an analogy, consider Kant's theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant's theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual.
But maybe I'm misunderstanding what you meant by "not actually important"?
Well, you can argue that the hypothetical situation is sufficiently exotic that you don't expect your intuitions to be reliable there.
It's actually pretty reasonable to me to say that the shallow pond example is simple, realistic and important, compared to the repugnant conclusion, which is abstract, unusual, unreliable and hence useless.
If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't}.
I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.
As an analogy
I guess I could attach some sort of plausibility score to moral thought experiments. Rescuing a drowning child gets a score near 1, since rescue situations really do happen and it's just a matter of detail about how much it costs the rescuer. As applied to donating to charity, the score might have to be lowered a little to account for how donating to charity isn't an exact match for the child in the pond.
The Nazi officials case… seems pretty plausible to me? Like, didn't that actually happen?
Something of a more intermediate case between the drowning child and creating large populations would be the idea of murdering someone to harvest their organs. This is feasible today, but irrelevant since no-one is altruistically murdering people for organs. I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory." (And if they want to analyse it really closely and work out the boundaries of when killing one person to save others is moral and when not, then that's also a reasonable response.)
A thought experiment involving the creation of large populations gets a plausibility score near zero.
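The plausibility-score idea in the last few paragraphs can be made concrete with a toy calculation (this is my own illustration: the numeric scores and the simple product rule are assumptions, not anything stated in the thread):

```python
# Toy sketch of the "plausibility score" idea: weight how much a
# counterintuitive implication counts against a theory by how plausible
# the scenario producing it is. All numbers are illustrative guesses.
scenarios = {
    "drowning child": 0.95,             # rescue situations really happen
    "donating to charity": 0.85,        # close, but not exact, analogue
    "organ harvesting": 0.50,           # feasible today, but not practised
    "creating vast populations": 0.01,  # plausibility near zero
}

def evidential_weight(plausibility: float, repugnance: float) -> float:
    """How strongly a counterintuitive implication in a scenario should
    count against the theory. The product rule is one simple choice;
    nothing hinges on it."""
    return plausibility * repugnance

# Even a maximally repugnant implication carries little weight when the
# scenario is this implausible:
print(evidential_weight(scenarios["creating vast populations"], 1.0))  # prints 0.01
```

On a scheme like this, the repugnant conclusion is discounted almost entirely by its tiny plausibility factor, while the drowning child keeps nearly full weight.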
I reject the implication inside the curly brackets that I added.
[...]
I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory."
I find your position unclear. On the one hand, you suggest that thought experiments involving situations that aren't actual don't constitute a problem for a theory (first quote above). On the other hand, you imply that they do constitute a problem, which is addressed by restricting the scope of the theory so that it doesn't apply to such situations (second quote above). Could you clarify?
Maybe I've misinterpreted "repugnant" here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for "repugnant". But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context – there might be nuances that I'm missing), so I'll try to describe my position using other words.
If my moral theory, when applied to some highly unrealistic thought experiment (which doesn't have some clear analog to something more realistic), results in a conclusion that I really don't like, then:
I accept that my moral theory is not a complete and correct theory; and
this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Thank you for the clarification. I think I understand your position now.
this is not something that bothers me at all. If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory's simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, this causes you to either restrict the scope of the theory, and thus make it more complex, or recognize that it doesn't fit the data as well as you thought before. In either case, it seems you should update by believing the theory to a lesser degree.
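The updating argument in the paragraph above can be given a toy Bayesian form (my own illustration, with made-up numbers; it assumes we treat a counterintuitive implication as evidence bearing on the theory):

```python
# Toy Bayesian sketch: observing that a theory has a counterintuitive
# implication should lower our confidence in it, provided such clashes
# are assumed likelier under a false theory than under a true one.
def update(prior: float, p_clash_if_true: float, p_clash_if_false: float) -> float:
    """Posterior P(theory is right | a counterintuitive implication is found)."""
    numerator = p_clash_if_true * prior
    return numerator / (numerator + p_clash_if_false * (1.0 - prior))

prior = 0.6  # initial confidence in the theory (illustrative)
posterior = update(prior, p_clash_if_true=0.2, p_clash_if_false=0.5)
print(round(posterior, 3))  # prints 0.375: confidence drops below the prior of 0.6
```

The conclusion survives any choice of numbers so long as clashes are less likely under a true theory: the posterior always lands below the prior.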
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory?
I think my disagreement is mostly on (1) – I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't know of any simple rules to decide which principles I deem more important in which situations. Certainly there are many realistic problems which I think could go either way.
But I agree that, all other things equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully considers enormous aggregative-ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would solve the repugnant conclusion (whatever I think "repugnant" means in this future!). Applied to present-me's day-to-day problems, such a simplified theory will likely give slightly different answers to what I think today: maybe the uncertainty I have today about certain court cases would be resolved by one of the principles that future-me thinks of.
But I don't think the answers will change a lot. I think my current moral theory basically gives appropriate answers (sometimes uncertain ones) to my problems today. There's wiggle-room in places, but there are also some really solid intuitions that I don't expect future-me to sacrifice. Rescuing the drowning child (at least when I live in a world without the ability to create large numbers of sentient beings!) would be one of these.
I think it quite obvious that one who does not observe a given theory is not thereby disarmed from criticising it. Similarly, a rejection of moralism is not equivalent to your imputed upshot that "nothing is right or wrong" (although we can imagine cases in which that could be so). On the former point, critiquing a theory that adheres to but contradicts intuitionistic premises is a straightforward instance of immanent critique. On the latter, quite famously, neither Bernard Williams nor Raymond Geuss had any truck with moralism, yet neither was a "relativist".