I think you missed my point a bit. Nothing I said was to challenge impartiality. I am not at all saying that people further away in time and space are any less intrinsically morally relevant, only that if you ascribe some non-zero probability to the universe being finite then you can ascribe a preference ordering. All else being identical, helping someone now is better than helping someone after the universe might no longer exist, because, you know, the universe might not exist then (they are no less morally relevant). And so, ta-da, all paradoxes to do with infinite ethics go away, as you can no longer shift utility infinitely into the future.
You can ignore infinite cases if you assume only the aggregate matters and infinite/undefined cases can’t be predictably affected. But, if you’re a risk-neutral expected value maximizing total utilitarian, you should be trying to increase the probability of a positive infinity aggregate or reduce the probability of a negative infinity aggregate (or both), and at any finite cost and fanatically.
I don’t see why that is different from saying “But, if you’re a risk-neutral expected value maximizing total utilitarian, you should be trying to increase the probability of a [very large number]* aggregate or reduce the probability of a negative [very large number]* aggregate (or both), and at [essentially] any finite cost and fanatically.”
I don’t think you need infinities to say that very small probabilities of very big positive (or negative) outcomes mess up utilitarian thinking. (See Pascal’s Mugging or the Repugnant Conclusion.)
My claim is that any paradox with infinities is very easily resolvable (e.g. by noting that there is some chance the universe is not infinite, etc.) or can be reduced to a well-known existing ethical challenge (e.g. utilitarianism can get fanatical about large numbers).
I hope that explains where I am coming from and why I might say that actually you “can ignore infinite cases”.
* E.g. TREE(3)
I agree. One does not even need large numbers nor small probabilities. Complex cluelessness is enough to make the result of any expected value calculation quite unclear. However, it is not totally arbitrary, so I still endorse expectational total hedonistic utilitarianism.
(I’ve edited this comment somewhat.)
It is pretty much the same, but I don’t see why that justifies ignoring infinities, if you maximize total utility risk neutrally. I personally assign <50% weight to fanatical decision theories, so I mostly don’t maximize total utility risk neutrally. Maybe you mean something similar (or less than 100% weight to fanatical views)?
Some people have proposed specific responses to Pascal’s mugging and the RC that are more specific to the structures of those problems, but they can’t be used to ignore infinities in general.
I am not sure that we disagree here / I expect we are talking about slightly different things. I am not expressing any view on fanaticism issues or how to resolve them.
All I was saying is that infinities are no more of a problem for utilitarianism/ethics than large numbers. (If you want to say “infinite” or “TREE(3)” in a thought experiment, either works.) I am not 100% sure, but based on what you said, I don’t think you disagree on that.
Doesn’t infinity make aggregating utilities undefined, in a way that’s not true for just very large numbers? Maybe I’m missing something here though.
So what? What thought experiment does this lead to that causes a challenge for ethics? If infinite undefinedness causes a problem for ethics, please specify it, but so far the infinite ethics thought experiments I have seen either:
Are trivially the same as non-infinite thought experiments. For example, undefinedness is a problem for utilitarianism even without infinity: think of the Pascal’s mugger who offers to create “an undefined and unspecified but very large amount of utility, so large as to make TREE(3) appear small”.
Make no sense. They require assuming two things that physics says are not true: let us assume that we know with 100% certainty that the universe is infinite, and let us assume that we can treat those infinities as anything other than limits of finite series. This makes no more sense than thought experiments about what if time travel were possible, and is little better than asking what if “consciousness is actually cheesy-bread”.
Maybe I am missing something: there may, for example, be some really good solutions to Pascal’s mugging that work in the very large but undefined cases but not in the infinite case, or some other kind of thought experiment I have not seen yet, in which case I am happy to retract my skepticism.
Hi Linch,
I would say both “very large unknown positive number x” − “very large unknown positive number y” and ∞ − ∞ are undefined. However, whereas the value of the 1st difference can in theory be determined by looking into what is generating x and y, the 2nd difference cannot be resolved even in principle.
∞ − ∞ can sometimes be resolved under certain assumptions with richer representations of infinite outcomes, e.g. if both infinities are the result of infinite series over a common ordered index set (e.g. spacetime locations by distance from a specific location, or moral patients in some order), you can rearrange the difference of the series as a series of differences. This doesn’t always work, because the series of differences may not have a limit at all.
See:
https://forum.effectivealtruism.org/posts/N2veJcXPHby5ZwnE5/hayden-wilkinson-doing-good-in-an-infinite-chaotic-world
https://link.springer.com/article/10.1007/s11098-020-01516-w
Right, but I would classify these cases as resolving “very large unknown positive number x” − “very large unknown positive number y”. It looks to me that infinite series are endless in the sense that we cannot point to where they end, but they do not contain infinity.
For example, the natural numbers 1, 2, … go on indefinitely, but any single one of them is still finite, so I would say they can be represented by 1, 2, …, N, where N is a very large unknown number. From the point of view of physics, I am pretty confident we could assume N = TREE(3)^TREE(3)^TREE(3)^TREE(3)^TREE(3)^TREE(3)^TREE(3)^TREE(3)^TREE(3)^TREE(3) while explaining exactly the same evidence.
Saved to watch later. Thanks for sharing!
Ah, sorry for misinterpreting!
I think you are saying that, although utility may exist arbitrarily far away (in time/space), the likelihood of it existing tends to zero as it gets further and further away from us. So the expected utility of the utility which is further and further away will approach zero (e.g. in the same way that x e^(-x^2) tends to zero as x tends to infinity, which explains the finite mean of a normal distribution).
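A toy numerical check of the mathematical claim in this paraphrase (the numbers are my own, not from the discussion): the weight x·e^(-x²) tends to 0, and its integral over [0, U] converges as U grows (analytically to (1 − e^(-U²))/2, i.e. to 1/2), so a total weighted this way stays finite even over an unbounded range.

```python
# Sketch: a quantity weighted by x * exp(-x**2) stays finite even when
# integrated over an unbounded range, because the weight decays fast
# enough. The analytic limit of the integral over [0, inf) is 0.5.

import math

def weighted_total(upper, steps=100_000):
    """Left Riemann sum of x * exp(-x**2) on [0, upper]."""
    dx = upper / steps
    return sum((i * dx) * math.exp(-((i * dx) ** 2)) * dx for i in range(steps))

for u in (1, 5, 10):
    print(u, weighted_total(u))  # increases toward 0.5, never diverges
```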
I think allowing for infinities is still a problem. My understanding is that, when people talk about infinite worlds, they do not mean finite worlds tending to infinite worlds. They mean literally infinite worlds. In this case, the expected utility will be infinite, not just tending to infinity as interpreted in physics.
Hi Vasco, no, I am not saying that at all. Sorry, I don’t know how best to express this. I never said utility should approach zero at all. I said your discount could be infinitesimally small if you wish. So utility declines over time, but that does not mean it needs to approach zero. In fact, in the limit it can stay at 1 but still allow a preference ordering.
For example, consider the series where you start with 1, then subtract a quarter, then subtract 1⁄8 from that, then 1⁄16 from that, and so on*, which goes: 1, 3⁄4, 5⁄8, 9⁄16, 17⁄32, 33⁄64, … . This does not get closer to zero over time; it gets closer to 0.5. But each point in the series is smaller than the previous, so you can put them in order.
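The sequence described above can be generated directly; a short sketch using exact fractions confirms it is strictly decreasing (so a preference ordering exists) while its limit is 0.5, not 0:

```python
# Generate the sequence: start at 1, then subtract 1/4, 1/8, 1/16, ...
# Each term equals 1/2 + 1/2**(n+1), so the limit is 1/2, not 0.

from fractions import Fraction

terms = [Fraction(1)]
denom = 4
for _ in range(8):
    terms.append(terms[-1] - Fraction(1, denom))
    denom *= 2

print([str(t) for t in terms])  # ['1', '3/4', '5/8', '9/16', '17/32', ...]
assert all(a > b for a, b in zip(terms, terms[1:]))  # strictly decreasing
```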
Utility could tend to zero if a constant discount rate were applied to account for a small but steady chance that the universe might stop existing. But it would make more sense to apply a declining discount rate, so there is no need to say it tends to zero or to any other number.
In short, if there is a tiny tiny probability that the universe will not last forever, then that should be sufficient to apply a preference ordering to infinite sequences and resolve any paradoxes involving infinity and ethics.
You can only get those weird infinite ethics paradoxes if you say: let’s pretend for a minute that with 100% certainty we live in an infinite world, and it is “literally infinite … not just tending to infinity as interpreted in physics”. Which is just not the case!
I mean, you could do that thought experiment, but I don’t see how that makes any more sense than saying: let’s pretend for a minute that time travel is possible, and then pointing out that utilitarianism doesn’t have a good answer to whether I should go back in time and kill Hitler if doing so would stop me from going back in time and killing Hitler.
In my view such thought experiments are nonsense; all you are doing is pointing out that in impossible cases where there are paradoxes your decision theory breaks down – well, of course it does.
Hope that helps.
“You can only get those weird infinite ethics paradoxes if you say: let’s pretend for a minute that with 100% certainty we live in an infinite world, and it is ‘literally infinite … not just tending to infinity as interpreted in physics’. Which is just not the case!”
EU maximization will typically mean all expected values are undefined under naive treatment of infinities like real numbers, because every option should have a nonzero probability of +infinity and a nonzero probability of -infinity (e.g. there’s a chance there’s a god and you will worship the right one, or there’s a chance the universe is infinite and the aggregate utility is undefined). So, even if you assign tiny probabilities to infinities, they break naive EU maximization.
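This breakage can be seen directly in floating-point arithmetic, which follows the same naive “treat infinities like real numbers” rule: once an option carries nonzero probabilities of both +∞ and −∞ outcomes, its expected value is undefined. The probabilities and finite payoff below are made-up numbers for illustration.

```python
# Naive expected value with nonzero probabilities of +inf and -inf
# outcomes. IEEE arithmetic returns nan for inf + (-inf), mirroring the
# claim that such expected values are undefined under naive treatment.

import math

outcomes = [
    (1e-9, math.inf),       # tiny chance of an infinitely good aggregate
    (1e-9, -math.inf),      # tiny chance of an infinitely bad aggregate
    (1 - 2e-9, 42.0),       # overwhelmingly likely finite aggregate
]
ev = sum(p * u for p, u in outcomes)
print(ev)  # nan: the two tiny infinite branches make the EV undefined
```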
There are extensions that break less, but each framework for handling both finite and infinite cases seems to have serious problems or to require pretty arbitrary assumptions.
“In my view such thought experiments are nonsense; all you are doing is pointing out that in impossible cases where there are paradoxes your decision theory breaks down – well, of course it does.”
Infinities haven’t been proven to be impossible, though.
What do you think is the meaning of possibility? In my view, it only makes sense to talk about conditional probabilities, i.e. we can only say something is more or less likely conditional on a set of assumptions.
For example, when I say the probability of getting heads in a coin flip is about 50 %, I am assuming I flip it in a non-strategic way (e.g. not aiming for only 0.5 rotations such that I get my desired outcome), and that the coin has heads on one face, and tails on the other, among others. The probability of getting heads will tend to 0 or 1 as I specify more and more of the conditions of the coin flip.
Similarly, when we talk about the probability of an ongoing chess match between Magnus and Alireza being won by Magnus, we will conditionalise on the current state of the board, and on the game continuing to follow the rules of chess as we know them, among others.
I see axioms as the propositions which are always playing the role of assumptions. They are like the rules of a table game, which allow us to determine which player is most likely to win. In this sense, asking what is the probability of infinite worlds sounds similar to asking what is the probability of the rules of chess being correct. It is meaningless to say the rules of chess are correct or incorrect; all I can do is talk about the likelihood of certain board states conditional on the rules of chess being followed. In reality, we can only say what is the probability of a given world state conditional on some axioms.
I am still a little confused about how to decide what should be defined as axioms, but I think 2 important criteria are:
The set of axioms should be consistent.
All axioms should feel intuitively true.
I am happy to reject the possibility of infinite worlds because:
Setting the possibility of infinite worlds as an axiom would not be consistent with Amanda’s 5 axioms (i.e. all 6 cannot be true at the same time).
Amanda’s 5 axioms feel intuitively true, whereas the possibility of infinite worlds feels intuitively false.
Got it, thanks!
I agree.