“The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists…”
This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably local region, like the Earth. No-one responds to the drowning child by saying, “well there might be an infinite number of sentient life-forms out there, so it doesn’t matter if the child drowns or I damage my suit”. It is just not a consideration.
So I disagree very strongly with the framing of your post, since the bit I quoted is in the summary. The rest of your post is on the somewhat more reasonable topic of comparing utilities across an infinite number of generations. I don’t really see the use of this (you don’t need a fully developed theory of infinite ethics to justify a carbon tax; considering a handful of generations will do), and don’t see the use of the post on this forum, but I’m open to suggestions of possible applications.
Thanks for the feedback. A couple of thoughts:
1. I actually agree with you that most people shouldn’t be worried about this (hence my disclaimer that this is not for a general audience). But that doesn’t mean no one should care about it.
2. Whether we are concerned about an infinite amount of time or an infinite amount of space doesn’t seem relevant at a mathematical level, which is why I grouped them together.
3. As per (1), it might not be a good use of your time to worry about this. But if it is, I would encourage you to read the Nick Bostrom paper I linked above, since I think “just look in a local region” is too flippant. E.g. there may be an infinite number of Everett branches we should care about, even if we restrict our attention to Earth.
Hopefully this is my last comment in this thread, since I don’t think there’s much more I have to say after this.
I don’t really mind if people are working on these problems, but it’s a looooong way from effective altruism.
Taking into account life-forms outside our observable universe for our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don’t know if it’s useful to do this, but it doesn’t seem obviously stupid.)
Many-worlds is even further away from effective altruism. (And quantum probabilities sum to 1 anyway, so there’s a natural way to weight all the branches if you want to start shooting people if and only if a photon travels through a particular slit and interacts with a detector, ….)
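(To make the branch-weighting point concrete: a minimal sketch under the usual Born-rule reading, with $w_b$ the weight of branch $b$ and $U_b$ the value realised in it, both symbols introduced here purely for illustration. Since
\[ \sum_b w_b = 1, \qquad \mathbb{E}[U] = \sum_b w_b\, U_b , \]
the branch-weighted total stays finite so long as per-branch value is bounded; counting more branches does not by itself introduce an infinity.)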
I think the relevance of this post is that it tentatively endorses some type of time-discounting (and also space-discounting?) in utilitarianism. This could be relevant to considerations of the far future, which many EAs think is very important. Though presumably we could make the asymptotic part of the function as far away as we like, so we shouldn’t run into any asymptotic issues?
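(A rough sketch of the sort of discounting presumably being gestured at, with $u_t$ the welfare attributed to generation $t$, a bound $u_{\max} \ge |u_t|$, and a discount factor $0 < \gamma < 1$, all assumed here for illustration:
\[ \Bigl|\sum_{t=0}^{\infty} \gamma^{t} u_t \Bigr| \;\le\; \sum_{t=0}^{\infty} \gamma^{t} u_{\max} \;=\; \frac{u_{\max}}{1-\gamma} \;<\; \infty , \]
so the discounted total converges however many generations are counted, and taking $\gamma$ close to 1 postpones the point at which discounting bites as far as one likes.)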
“No-one responds to the drowning child by saying, ‘well there might be an infinite number of sentient life-forms out there, so it doesn’t matter if the child drowns or I damage my suit’. It is just not a consideration.”
“It is not an issue for altruists otherwise—everyone saves the drowning child.”
I don’t understand what you are saying here. Are you claiming that because ‘everyone’ does do X, or because ‘no-one’ does not do X (putting those in quotation marks because I presume you don’t literally mean what you wrote, but rather that the vast majority of people would/would not do X), X must be morally correct? That strikes me as...problematic.
Letting the child drown in the hope that
a) there’s an infinite number of life-forms outside our observable universe, and
b) the correct moral theory does not simply require counting utilities (or whatever) in some local region
strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.
Population ethics (including infinite ethics) is replete with impossibility theorems showing that no moral theory can satisfy all of our considered intuitions. (See this paper for an overview.) So you cannot simply point to a counterintuitive implication and claim that it disproves the theory from which it follows. If that procedure was followed consistently, it would disprove all moral theories.
I consider this a reason to not strictly adhere to any single moral theory.
This statement is ambiguous. It either means that you adhere to a hybrid theory made up of parts of different moral theories, or that you don’t adhere to a moral theory at all. If you adhere to a hybrid moral theory, this theory is itself subject to the impossibility theorems, so it, too, will have counterintuitive implications. If you adhere to no theory at all, then nothing is right or wrong; a fortiori, not rescuing the child isn’t wrong, and a theory’s implying that not rescuing the child isn’t wrong cannot therefore be a reason for rejecting this theory.
OK—I mean the hybrid theory—but I see two possibilities (I don’t think it’s worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):
1. In my hybridisation, I’ve already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.
2. Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of “what philosophers can write papers on”, rather than what is actually important. The repugnant conclusion falls under this category.
Whichever way it works out, I stick resolutely to saving the drowning child.
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion ‘not actually important’, but the drowning child example ‘important’?
Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.
“Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion ‘not actually important’, but the drowning child example ‘important’?”
Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.
I’m sorry, but I don’t understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don’t. As an analogy, consider Kant’s theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant’s theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual.
But maybe I’m misunderstanding what you meant by “not actually important”?
Well, you can argue that the hypothetical situation is sufficiently exotic that you don’t expect your intuitions to be reliable there.
It seems pretty reasonable to me to say that the shallow pond example is simple, realistic and important, whereas the repugnant conclusion is abstract, unusual, unreliable and hence useless.
“If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don’t}.”
I reject the implication inside the curly brackets that I added. I don’t care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I’m still around when it becomes relevant, I’m happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.
“As an analogy…”
I guess I could attach some sort of plausibility score to moral thought experiments. Rescuing a drowning child gets a score near 1, since rescue situations really do happen and it’s just a matter of detail about how much it costs the rescuer. As applied to donating to charity, the score might have to be lowered a little to account for how donating to charity isn’t an exact match for the child in the pond.
The Nazi officials case… seems pretty plausible to me? Like didn’t that actually happen?
Something of a more intermediate case between the drowning child and creating large populations would be the idea of murdering someone to harvest their organs. This is feasible today, but irrelevant since no-one is altruistically murdering people for organs. I think it’s reasonable for someone previously a pure utilitarian to respond with, “Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I’ll continue to use it elsewhere, without claiming that it’s a complete moral theory.” (And if they want to analyse it really closely and work out the boundaries of when killing one person to save others is moral and when not, then that’s also a reasonable response.)
A thought experiment involving the creation of large populations gets a plausibility score near zero.
“I reject the implication inside the curly brackets that I added.”
[...]
“I think it’s reasonable for someone previously a pure utilitarian to respond with, ‘Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I’ll continue to use it elsewhere, without claiming that it’s a complete moral theory.’”
I find your position unclear. On the one hand, you suggest that thought experiments involving situations that aren’t actual don’t constitute a problem for a theory (first quote above). On the other hand, you imply that they do constitute a problem, which is addressed by restricting the scope of the theory so that it doesn’t apply to such situations (second quote above). Could you clarify?
Maybe I’ve misinterpreted ‘repugnant’ here? I thought it basically meant “bad”, but Google tells me that a second definition is “in conflict or incompatible with”, and now that I know this, I’m guessing that it’s the latter definition that you are using for ‘repugnant’. But I’m finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I’m not sure if it’s supposed to in this context—there might be nuances that I’m missing), so I’ll try to describe my position using other words.
If my moral theory, when applied to some highly unrealistic thought experiment (which doesn’t have some clear analog to something more realistic), results in a conclusion that I really don’t like, then:
1. I accept that my moral theory is not a complete and correct theory; and
2. this is not something that bothers me at all. If the thought experiment ever becomes relevant, I’ll worry about how to patch up the theory then. In the meantime, I’ll carry on trying to live by my moral theory.
Thank you for the clarification. I think I understand your position now.
“this is not something that bothers me at all. If the thought experiment ever becomes relevant, I’ll worry about how to patch up the theory then. In the meantime, I’ll carry on trying to live by my moral theory.”
Why doesn’t it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn’t this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory’s simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, this causes you to either restrict the scope of the theory, and thus make it more complex, or recognize that it doesn’t fit the data as well as you thought before. In either case, it seems you should update by believing the theory to a lesser degree.
“Why doesn’t it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn’t this lower your confidence in the theory?”
I think my disagreement is mostly with (1): I expect that a correct moral theory would be horrendously complicated. I certainly can’t reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it’s not clear what a moral decision would be), and I don’t know of any simple rules to decide which principles I deem more important in which situations. Certainly there are many realistic problems which I think could go either way.
But I agree that all other things equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully considers enormous aggregative ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would solve the repugnant conclusion (whatever I think ‘repugnant’ means in this future!). Applied to present-me’s day-to-day problems, such a simplified theory will likely give slightly different answers to what I think today: maybe the uncertainty I have today about certain court cases would be resolved by one of the principles that future-me thinks of.
But I don’t think the answers will change a lot. I think my current moral theory basically gives appropriate answers (sometimes uncertain ones) to my problems today. There’s wiggle-room in places, but there are also some really solid intuitions that I don’t expect future-me to sacrifice. Rescuing the drowning child (at least when I live in a world without the ability to create large numbers of sentient beings!) would be one of these.
I think it quite obvious that one who does not observe a given theory is not thereby disarmed from criticising it. Similarly, a rejection of moralism is not equivalent to your imputed upshot that “nothing is right or wrong” (although we can imagine cases in which that could be so). In the case of the former, critiquing a theory that adheres to but contradicts intuitionistic premises is a straightforward instance of immanent critique. In the case of the latter, quite famously, neither Bernard Williams nor Raymond Geuss had any truck with moralism, yet neither was a ‘relativist’.
I sympathize with this. It seems likely that the population accessible to our actions is finite, so I’m not sure one need be all that worried about what happens in the infinite case. I’m unworried if my impact on Earth across its future is significantly positive, even though the question of whether I’ve made the (possibly infinite) universe better is undefined.
However, one frustration with this tactic is that infinitarian concerns can ‘slip in’ whenever they are afforded a non-zero credence. So although, given our best physics, it is overwhelmingly likely that the morally relevant domain of our actions will be constrained by a lightcone only finitely extended in the later-than direction (because of heat death, proton decay, etc.), we should assign some non-zero credence to our best physics being mistaken: perhaps life-permitting conditions could continue indefinitely, or we could wring out life asymptotically faster than the second law, etc. These ‘infinite outcomes’ swamp the expected value calculation, and so infinitarian worries loom large.
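(The swamping arithmetic, for illustration: write $p > 0$ for the credence assigned to an outcome of infinite value and $V_{\mathrm{fin}}$ for the finite value otherwise, labels assumed here rather than taken from the comment. Then
\[ \mathbb{E}[V] \;=\; p \cdot \infty \;+\; (1-p)\, V_{\mathrm{fin}} \;=\; \infty , \]
however small $p$ is, so the infinite term dominates the expected value no matter how well-behaved the finite part is.)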
Putting to one side my bias towards aggregative consequentialism, someone has to say it: to anyone except a radical consequentialist, the classic ‘hope physics is broken’ example does make you seem crazy and consequentialism seem wrong! :p
Or perhaps uncertainty about the size of the universe might lead to similar worries, if we merely know it is finite but do not have a bound.
The text immediately following the passage you quoted reads:
“for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness.”
This implies that the quantity of happiness in the universe stays the same after you save the drowning child. So if your reason for saving the child is to make the world a better place, you should be troubled by this implication.
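(For illustration, on a naive total view over an infinite population with welfare levels $u_i$: if $\sum_{i=1}^{\infty} u_i$ is infinite, then for any finite change $c$, such as one rescued child,
\[ c + \sum_{i=1}^{\infty} u_i \;=\; \infty \]
as well, so the totals with and without the rescue come out equally good. That is the arithmetic behind the implication being pressed here.)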
That is precisely the argument that I maintain is only a problem for people who want to write philosophy textbooks, and even then one that should only take a paragraph to tidy up. It is not an issue for altruists otherwise—everyone saves the drowning child.