I think you are saying that, although utility may exist arbitrarily far away (in time/space), the likelihood of it existing tends to zero...
Hi Vasco. No, I am not saying that at all; sorry, I don’t know how best to express this. I never said utility should approach zero. I said your discount could be infinitesimally small if you wish. So utility declines over time, but that does not mean it needs to approach zero. In fact, in the limit it can stay arbitrarily close to 1 and still allow a preference ordering.
For example, consider the series where you start with 1, then subtract a quarter, then subtract 1/8 from that, then 1/16 from that, and so on, which goes: 1, 3/4, 5/8, 9/16, 17/32, 33/64, … This does not get closer to zero over time – it gets closer to 0.5. But each point in the series is also smaller than the previous one, so you can put them in order.
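To make the limit explicit (this closed form is mine, not part of the original comment), the nth term of the series is

$$u_n = 1 - \sum_{k=2}^{n} \frac{1}{2^k} = \frac{1}{2} + \frac{1}{2^n}, \qquad \lim_{n \to \infty} u_n = \frac{1}{2},$$

so the sequence is strictly decreasing (each term exceeds the next by $1/2^{n+1}$) yet stays bounded away from zero.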
Utility could tend to zero if there were a constant discount rate applied to account for a small but steady chance that the universe might stop existing. But it would make more sense to apply a declining discount rate, in which case there is no need to say utility tends to zero or to any other particular number.
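As a sketch of the difference (my notation, assuming the universe survives period $s$ with probability $1 - p_s$): a constant hazard rate drives the weight on period $t$ to zero, while a declining one need not:

$$(1-p)^t \to 0, \qquad \prod_{s=1}^{t}(1-p_s) \to c > 0 \;\;\text{whenever}\;\; \sum_{s=1}^{\infty} p_s < \infty.$$

In both cases the weights strictly decrease (provided every $p_s > 0$), which is all a preference ordering needs.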
In short, if there is even a tiny probability that the universe will not last forever, that should be sufficient to apply a preference ordering to infinite sequences and resolve any paradoxes involving infinity and ethics.
I think allowing for infinities is still a problem.
You can only get those weird infinite ethics paradoxes if you say: let’s pretend for a minute that with 100% certainty we live in an infinite world, and it is “literally infinite … not just tend to infinity as interpreted in physics”. Which is just not the case!
I mean, you could do that thought experiment, but I don’t see how that makes any more sense than saying: let’s pretend for a minute that time travel is possible, and then pointing out that utilitarianism doesn’t have a good answer to whether I should go back in time and kill Hitler if doing so would stop me from going back in time and killing Hitler.
In my view such thought experiments are nonsense: all you are doing is pointing out that your decision theory breaks down in impossible cases where there are paradoxes – well, of course it does.

Hope that helps.
“You can only get those weird infinite ethics paradoxes if you say: let’s pretend for a minute that with 100% certainty we live in an infinite world, and it is ‘literally infinite … not just tend to infinity as interpreted in physics’. Which is just not the case!”
Under a naive treatment of infinities as real numbers, EU maximization will typically mean all expected values are undefined, because every option has a nonzero probability of +infinity and a nonzero probability of −infinity (e.g. there’s a chance there’s a god and you will worship the right one, or a chance the universe is infinite and the aggregate utility is undefined). So even if you assign only tiny probabilities to infinities, they break naive EU maximization.
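In symbols (my formalisation of the point above): if an option has probability $p > 0$ of utility $+\infty$ and probability $q > 0$ of $-\infty$, then treating infinities like real numbers gives

$$\mathbb{E}[U] = p \cdot (+\infty) + q \cdot (-\infty) + \text{(finite terms)} = \infty - \infty,$$

which is undefined however small $p$ and $q$ are.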
There are extensions that break less, but each framework for handling both finite and infinite cases seems to have serious problems or to require pretty arbitrary assumptions.
“In my view such thought experiments are nonsense: all you are doing is pointing out that your decision theory breaks down in impossible cases where there are paradoxes – well, of course it does.”
Infinities haven’t been proven to be impossible, though.
“Infinities haven’t been proven to be impossible, though.”
What do you think is the meaning of possibility? In my view, it only makes sense to talk about conditional probabilities, i.e. we can only say something is more or less likely conditional on a set of assumptions.
For example, when I say the probability of getting heads in a coin flip is about 50 %, I am assuming, among other things, that I flip it in a non-strategic way (e.g. not aiming for only 0.5 rotations such that I get my desired outcome), and that the coin has heads on one face and tails on the other. The probability of getting heads will tend to 0 or 1 as I specify more and more of the conditions of the coin flip.
Similarly, when we talk about the probability of an ongoing chess match between Magnus and Alireza being won by Magnus, we will conditionalise on, among other things, the current state of the board and the game continuing to follow the rules of chess as we know them.
I see axioms as the propositions which are always playing the role of assumptions. They are like the rules of a table game, which allow us to determine which player is most likely to win. In this sense, asking for the probability of infinite worlds sounds similar to asking for the probability that the rules of chess are correct. It is meaningless to say the rules of chess are correct or incorrect; all I can do is talk about the likelihood of certain board states conditional on the rules of chess being followed. In reality, we can only give the probability of a given world state conditional on some axioms.
I am still a little confused about how to decide what should be defined as axioms, but I think 2 important criteria are:
The set of axioms should be consistent.
All axioms should feel intuitively true.
I am happy to reject the possibility of infinite worlds because:
Setting the possibility of infinite worlds as an axiom would not be consistent with Amanda’s 5 axioms (i.e. all 6 cannot be true at the same time).
Amanda’s 5 axioms feel intuitively true, whereas the possibility of infinite worlds feels intuitively false.
Got it, thanks!
I agree.