I see now that my reply just above misinterpreted what you said, sorry. If I understand correctly, you were referring to what you mentioned here:
All options maximize expected utility (EU), since the expected utility will be undefined (or infinite) regardless. There’s always a nonzero chance you will end up choosing the right religion and be rewarded infinitely and a nonzero chance you will end up choosing badly and be punished infinitely, so the EU is +infinity + (-infinity) = undefined. (I got this from a paper, but I forget which one; maybe something by Alan Hájek.)
In response to 1, you might say that we should maximize the probability of +infinity and minimize the probability of -infinity before considering finite values. This could be justified through the application of plausible rationality axioms directly, in particular the independence axiom. This could reduce to EU maximization with some prior steps where we ignore equiprobable parts of distributions with the same values. However, infinite and unbounded values violate the continuity axiom. Furthermore, if we’re allowing infinities, especially as limits of aggregates of finite values like an eternity of positive welfare, then it would be suspicious to not allow unbounded finite values at least in principle. Unbounded finite values can lead to violations of the sure-thing principle, as well as vulnerability to Dutch books and money pumps (e.g. see here, here and my reply, and here). If the bases for allowing and treating infinities this way require the violation of some plausible requirements of rationality or require ad hoc and suspicious beliefs about what kinds of values are possible (infinity is possible but finite values must be bounded), then it’s at least not obvious that we’re normatively required to treat infinities this way. Some other decision theory might be preferable, or we can allow decision-theoretic uncertainty.
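As a concrete illustration of the undefined expected utility in the first quoted point, here is a minimal Python sketch with made-up probabilities; the NaN result is just IEEE floating point flagging that +infinity + (−infinity) has no defined value.

```python
import math

# Hypothetical credences: a nonzero chance of an infinite reward, a nonzero
# chance of an infinite punishment, and a finite remainder.
p_reward, p_punishment, p_rest = 0.01, 0.01, 0.98

terms = [
    p_reward * math.inf,       # +infinity contribution
    p_punishment * -math.inf,  # -infinity contribution
    p_rest * 0.0,              # finite contribution
]

expected_utility = sum(terms)
print(expected_utility)              # nan
print(math.isnan(expected_utility))  # True: the sum is undefined
```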
The 1st point is not a problem for me. For the reasons described in Ellis 2018, I do not think there are infinities.
As for the 2nd point, the definition of unbounded utilities Paul Christiano uses here and here involves “an infinite sequence of outcomes”. This point is also not a worry for me, as I do not think there are infinite sequences in the real world.
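For concreteness, here is a minimal sketch of the kind of construction such unbounded-utility arguments rely on (a St. Petersburg-style lottery with hypothetical payoffs, not necessarily the exact sequence used in the linked posts): outcome n has probability 2^-n and utility 2^n, so each outcome adds 1 to the expected utility and the total diverges as the infinite sequence of outcomes is extended.

```python
def partial_expected_utility(n_outcomes: int) -> float:
    """Expected utility of the lottery truncated to its first n_outcomes outcomes."""
    return sum((0.5 ** n) * (2.0 ** n) for n in range(1, n_outcomes + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_utility(n))  # 10.0, 100.0, 1000.0: grows without bound
```

If there are no infinite sequences of outcomes, the lottery is always truncated somewhere and its expected utility stays finite, which is the point being made above.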
Similarly, I think zeros only exist in the sense of representing arbitrarily small, but non-null values.
Do you just mean that you shouldn’t use 0 as a probability (maybe only for an event in a countable probability space)? I agree with that, which is called Cromwell’s rule.
(Or, are you saying zero can never accurately describe anything? Like the number of apples in my hand, or the number of dollars you have in a Swiss bank account? Or, based on your own claim, the number of infinite sequences that exist? The probability that “the number of things that exist and match definition X is 0” is in fact 0, for any X?)
I would say 0 can be used to describe abstract concepts, but I do not think it can be observed in the real world. All measurements have a finite sensitivity, so measuring zero only means the variable of interest is smaller than the sensitivity of the measurement. For example, if a thermometer with a sensitivity of 0.5 K and a range from 0 K to 300 K indicates 0 K, we can only say the temperature is lower than 0.5 K (we cannot say it is exactly 0).
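To make the sensitivity point concrete, here is a small Python sketch of a hypothetical thermometer with 0.5 K resolution: every true temperature below the sensitivity is reported as 0 K, so a reading of 0 only bounds the temperature from above.

```python
SENSITIVITY_K = 0.5  # assumed instrument resolution

def thermometer_reading(true_temperature_k: float) -> float:
    """Report the true temperature rounded down to the instrument's resolution."""
    return (true_temperature_k // SENSITIVITY_K) * SENSITIVITY_K

for true_t in (0.0, 0.1, 0.3, 0.49, 0.5, 0.7):
    print(true_t, thermometer_reading(true_t))
# 0.0, 0.1, 0.3 and 0.49 all read as 0.0; only 0.5 K and above register as nonzero.
```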
I agree 0 should not be used for real probabilities. Abstractly, we can use 0 to describe something impossible. For example, if X follows a uniform distribution between 0 and 1, the probability of X being between −2 and −1 is 0.
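Here is a small Python sketch of both halves of that (illustrative numbers only): the uniform example assigns probability exactly 0 to an impossible event, while a simple Bayesian update shows why a probability of 0 is a bad idea for real, empirical claims (Cromwell's rule): a prior of 0 can never be moved by any evidence.

```python
def uniform_01_probability(a: float, b: float) -> float:
    """P(a <= X <= b) for X uniformly distributed on [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

print(uniform_01_probability(-2.0, -1.0))  # 0.0: impossible under the model

def posterior(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis."""
    evidence = prior * likelihood + (1.0 - prior) * likelihood_if_false
    return prior * likelihood / evidence

print(posterior(0.0, 0.999, 0.001))  # 0.0: no evidence can revise a zero prior
```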
If I say I have 0 apples in my hands, I just mean 0 is the integer which most accurately describes the vague concept of the number of apples in my hands. It is not intended to be exactly 0. For example, I may have forgotten to account for my 2 bites, which would imply I only have 0.9 apples in my hands. Or I may consider that I only have 0.5 apples in my hands because I am only holding the apple with one hand (i.e. 50 % of my 2 hands). Or maybe having refers to who bought the apple, and I only contributed 50 % of its cost. In general, it looks like human language does not translate perfectly into exact numbers.