“You can only get those weird infinite ethics paradoxes if you say let’s pretend for a minute that with 100% certainty we live in an infinite world, and it is ‘literally infinite … not just tend to infinity as interpreted in physics’. Which is just not the case!”
Under a naive treatment of infinities as if they were real numbers, EU maximization will typically leave every expected value undefined, because every option has a nonzero probability of +infinity and a nonzero probability of -infinity (e.g. there is some chance there is a god and you will worship the right one, or some chance the universe is infinite and the aggregate utility is undefined). So, even if you assign only tiny probabilities to infinities, they break naive EU maximization.
There are extensions that break less, but each framework for handling both finite and infinite cases seems to have serious problems or to require fairly arbitrary assumptions.
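The breakdown above can be sketched numerically. A minimal illustration, treating infinities as IEEE floats (which is exactly the naive real-number treatment criticised here): mixing a +infinity outcome and a -infinity outcome makes the expectation NaN no matter how small their probabilities are. The outcomes and probabilities are invented for illustration.

```python
import math

# Naive expected value: sum of probability-weighted utilities, with
# infinities treated like ordinary real numbers (here: IEEE floats).
def naive_expected_value(outcomes_and_probs):
    return sum(p * u for u, p in outcomes_and_probs)

p = 1e-12  # arbitrarily small, but nonzero
ev = naive_expected_value([
    (10.0, 1 - 2 * p),   # ordinary finite outcome
    (math.inf, p),       # e.g. worshipping the right god
    (-math.inf, p),      # e.g. worshipping the wrong one
])
print(ev)              # nan: inf + (-inf) is undefined
print(math.isnan(ev))  # True
```

Shrinking `p` further changes nothing: `p * math.inf` is still `inf` for any positive `p`, so the sum always contains both `inf` and `-inf` and collapses to NaN.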
“In my view such thought experiments are nonsense, all you are doing is pointing out that in impossible cases where there are paradoxes your decision theory breaks down – well of course it does.”
Infinities haven’t been proven to be impossible, though.
What do you think is the meaning of possibility? In my view, it only makes sense to talk about conditional probabilities, i.e. we can only say something is more or less likely conditional on a set of assumptions.
For example, when I say the probability of getting heads in a coin flip is about 50%, I am assuming I flip it in a non-strategic way (e.g. not aiming for only 0.5 rotations such that I get my desired outcome), and that the coin has heads on one face and tails on the other, among other assumptions. The probability of getting heads will tend to 0 or 1 as I specify more and more of the conditions of the coin flip.
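The coin-flip point can be sketched with a toy model (the rotation mechanics below are a made-up illustration, not real coin physics): if the outcome is fully determined by the number of half-rotations, then conditioning only on "fair, non-strategic flip" gives about 50%, while conditioning on the exact number of half-rotations pins the probability to 0 or 1.

```python
import random

# Toy deterministic coin: an even number of half-rotations leaves the
# starting face up, so the outcome is fixed once the rotations are known.
def flip(half_rotations, starts_heads_up=True):
    lands_heads = (half_rotations % 2 == 0) == starts_heads_up
    return "heads" if lands_heads else "tails"

random.seed(0)
flips = [flip(random.randint(1, 1000)) for _ in range(100_000)]

# Conditional only on "fair, non-strategic flip": roughly 50%.
p_heads = flips.count("heads") / len(flips)
print(round(p_heads, 2))  # close to 0.5

# Conditional also on the exact number of half-rotations: 0 or 1.
print(flip(10))  # heads (even half-rotations, started heads up)
print(flip(7))   # tails
```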
Similarly, when we talk about the probability of an ongoing chess match between Magnus and Alireza being won by Magnus, we will conditionalise on the current state of the board, and on the game continuing to follow the rules of chess as we know them, among others.
I see axioms as the propositions which are always playing the role of assumptions. They are like the rules of a table game which allow us to determine which player is most likely to win. In this sense, asking what is the probability of infinite worlds sounds similar to asking what is the probability of the rules of chess being correct. It is meaningless to say the rules of chess are correct or incorrect; all I can do is talk about the likelihood of certain board states conditional on the rules of chess being followed. In reality, we can only say what is the probability of a given world state conditional on some axioms.
I am still a little confused about how to decide on what should be defined as axioms, but I think 2 important criteria are:
The set of axioms should be consistent.
All axioms should feel intuitively true.
I am happy to reject the possibility of infinite worlds because:
Setting the possibility of infinite worlds as an axiom would not be consistent with Amanda’s 5 axioms (i.e. all 6 cannot be true at the same time).
Amanda’s 5 axioms feel intuitively true, whereas the possibility of infinite worlds feels intuitively false.
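The consistency criterion can be made concrete with a toy propositional sketch. The "axioms" below are invented stand-ins, not Amanda's actual five; the point is only the structure: a set of axioms is consistent iff some truth assignment satisfies all of them at once, and adding one more proposition can make a previously consistent set unsatisfiable.

```python
from itertools import product

# Brute-force consistency check: a set of propositional axioms is
# consistent iff some assignment of truth values satisfies all of them.
def consistent(axioms, n_vars):
    return any(
        all(axiom(v) for axiom in axioms)
        for v in product([False, True], repeat=n_vars)
    )

# Five mutually consistent toy axioms over variables (a, b).
five = [
    lambda v: v[0] or v[1],       # a or b
    lambda v: not v[0] or v[1],   # a implies b
    lambda v: v[1],               # b
    lambda v: True,               # trivially true
    lambda v: v[1] or not v[1],   # trivially true
]
sixth = lambda v: not v[1]        # not b -- clashes with "b"

print(consistent(five, 2))            # True: all five can hold together
print(consistent(five + [sixth], 2))  # False: all six cannot
```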