Now, there’s an honest and accurate genie — or God or whoever’s simulating our world or an AI with extremely advanced predictive capabilities — that offers to tell you exactly how A will turn out.[9] Talking to them and finding out won’t affect A or its utility; they’ll just tell you what you’ll get.
This seems impossible, for the possibilities that account for ~all the expected utility (without which it’s finite)? You can’t fit enough bits in a human brain or lifetime (or all accessible galaxies, or whatever). Your brain would have to be expanded infinitely (any finite size wouldn’t be enough). And if we’re giving you an actually infinite brain, the part about how infinite expectations of finite outcomes are more conservative arguments than actual infinities goes away.
I do want to point out that the results here don’t depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn’t be controversial, and I’d guess our universe is infinite with probability >80%).
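To make this concrete, here’s a minimal sketch, assuming a standard St Petersburg payoff structure (illustrative numbers, not necessarily the post’s exact A): every possible outcome is finite, yet the truncated expected values grow without bound, so the full expectation is infinite without any actually infinite outcome.

```python
from fractions import Fraction

# St Petersburg-style prospect (illustrative, not the post's exact A):
# payoff 2**k with probability 2**-k, for k = 1, 2, 3, ...
# Every payoff is finite, but the expectation diverges.
def truncated_expected_value(max_k: int) -> Fraction:
    """Expected value restricted to the first max_k outcomes."""
    return sum(Fraction(2**k) * Fraction(1, 2**k) for k in range(1, max_k + 1))

for max_k in (10, 100, 1000):
    # Each partial sum equals max_k, so the expectation exceeds every finite bound,
    # even though no single outcome and no single probability is infinite.
    print(max_k, truncated_expected_value(max_k))
```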
And if we’re giving you an actually infinite brain, the part about how infinite expectations of finite outcomes are more conservative arguments than actual infinities goes away.
My post also covers two impossibility theorems that don’t depend on anyone having arbitrary precision or unbounded or infinite representations of anything:[1]
Stochastic Dominance, Anteriority and Impartiality are jointly inconsistent.
Stochastic Dominance, Separability and Compensation (Impartiality) are jointly inconsistent.
The proofs are also of course finite, and the prospects used have finite representations, even though they represent infinitely many possible outcomes and unbounded populations.
It wouldn’t have to definitely be infinite, but I’d guess it would have to be expandable to arbitrarily large finite sizes, with the size depending on the outcome to represent, which I think is also very unrealistic. I discuss this briefly in my Responses section. Maybe it’s not impossible if we’re dealing with arbitrarily long lives, because we could keep expanding over time, although there could be other practical physical limits that would rule this out (maybe it would require so much density that it would collapse into a black hole?).
One way to illustrate this point is with Turing machines,[1] with finite but arbitrarily expandable tape to write on for memory. There are (finite) Turing machines that can handle arbitrarily large finite inputs, e.g. doing arithmetic with or comparing two arbitrarily large but finite integers. They only use a finite amount of tape at a time, so we can just feed more and more tape for larger and larger numbers. So, never actually infinite, but arbitrarily large and arbitrarily expandable. A similar argument might apply for more standard computer architectures, with expandable memory, but I’m not that familiar with how standard computers work.
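Here’s a minimal sketch of the “finite but arbitrarily expandable tape” idea (a toy example, not a full Turing machine): a binary counter whose tape is a list that is only extended when a carry needs a new cell, so the memory in use is always finite but unbounded across inputs.

```python
# Toy illustration of an expandable tape: increment a binary counter in place.
# The tape in use is always finite; it only grows when a carry needs a new cell.
def increment(tape: list) -> list:
    """Binary increment, least significant bit first; extends the tape on overflow."""
    i = 0
    while True:
        if i == len(tape):   # head walked off the end: splice in more tape
            tape.append(0)
        if tape[i] == 0:
            tape[i] = 1
            return tape
        tape[i] = 0          # carry and keep moving
        i += 1

tape = [0]
for _ in range(1000):        # count to 1000; the tape only ever holds 10 cells
    increment(tape)
print(len(tape), tape)       # grew on demand, but is still finite
```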
You might respond that we need an actual infinite amount of possible tape (memory space) to be able to do this, i.e. that there has to be an infinite amount of matter available to us to turn into memory space. That isn’t true. The universe and the amount of available matter for tape could be arbitrarily large but finite, we could (in principle) need less tape than what’s available in every possible outcome, and the amount of tape we’d need would scale with the “size” or value of the outcome. For example, if we want to represent how long you’ll live in years, or an upper bound on it, in case you might live arbitrarily long, the amount of tape you’d need would scale with how long you’d live. It could scale much more slowly, e.g. we could represent the log of the number of years, or the log of the log, or the log of the log of the log, etc. Still, the length of the tape would have to be unbounded across outcomes.
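As a rough illustration of that scaling (the specific numbers here are just for illustration): writing down n years directly takes about log2(n) bits, writing down only ⌈log2(n)⌉ takes about log2(log2(n)) bits, and so on, but no fixed tape length works for every finite outcome.

```python
def bits_needed(n: int) -> int:
    """Bits to write n in binary (at least 1)."""
    return max(1, n.bit_length())

for years in (10**3, 10**12, 10**100):
    direct = bits_needed(years)        # ~log2(years)
    log_of_log = bits_needed(direct)   # ~log2(log2(years)): much slower, still unbounded
    print(years, direct, log_of_log)

# Any fixed tape length L is exceeded by some finite outcome (e.g. years = 2**(2**L)),
# so the tape must be unbounded across outcomes even though each outcome only ever
# needs finitely much.
```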
So, I’d have concerns about:
there not being enough practically accessible matter available (even if we only ever need a finite amount), and
the tape being too spread out spatially to work (like past the edge of the observable universe), or
the tape not being packable densely enough without collapsing into a black hole.
So the scenario seems unrealistic and physically impossible. But if it’s impossible, it’s for reasons that don’t have to do with infinite value or infinitely large things (although black holes might involve infinities).
there not being enough practically accessible matter available (even if we only ever need a finite amount), and
This is what I was thinking about. If I need a supply of matter set aside in advance to be able to record/receive an answer, no finite supply suffices. Only an infinite brain/tape, or an infinite pile of tape-making resources, would suffice.
If the resources are created on demand ex nihilo, and in such a way that the expansion processes can’t just be ‘left on’, you could try to jury-rig around it.
If the resources are created on demand ex nihilo, and in such a way that the expansion processes can’t just be ‘left on’, you could try to jury-rig around it.
The resources wouldn’t necessarily need to be created on demand ex nihilo either (although that would suffice), but either way, we’re forced into extremely remote possibilities that deny our current best understanding of physics, and that are perhaps less likely than infinite accessible resources (or other relevant infinities). That should be enough to say it’s less conservative than actual infinities and to make your point for this particular money pump, but it again doesn’t necessarily depend on actual infinities. However, some people actually assign 0 probability to infinity (I think they’re wrong to do so), and some of them may be willing to grant this possibility instead. For them, it would actually be more conservative.
The resources could also just already exist, by assumption, in large enough quantities in each outcome of the prospect (at least with nonzero probability for arbitrarily large finite quantities). For example, the prospect could be partially about how much information we can represent to ourselves (or recognize). We could be uncertain about how much matter would be accessible and how much we could do with it, and we may not be able to put an absolute hard upper bound on it with certainty, even if we could with near-certainty, given our understanding of physics and the universe and our confidence in them. And this could still be the case conditional on no infinities. So, we could consider prospects with extremely low probability heavy tails for how much we could represent to ourselves, which would have the important features of St Petersburg prospects for the money pump argument. It’s also something we’d care about naturally, because larger possible representations would tend to coincide with much more possible value.
St Petersburg prospects already depend on extremely remote possibilities to be compelling, so if you object to extremely low probabilities or assign 0 probability to them (deny the hypothetical), then you can already object at this point without actual infinities. That being said, someone could hold that finding out the value of a St Petersburg prospect, up to unboundedly large values, is impossible with certainty without an actual infinity (and so reject Cromwell’s rule), but that St Petersburg prospects themselves are still possible despite this.
If you don’t deny with certainty the possibility of finding out unbounded values without actual infinities, then we can allow “Find out A” to fail sometimes, but work in enough exotic possibilities with heavy tails that A conditional on it working (but not on its specific value) still has infinite expected utility. Then we can replace B−$100 in the money pump with a prospect D, defined as follows in my next comment, and you still get a working money pump argument.
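As a quick check that this is coherent (with illustrative numbers, not the post’s actual prospects): suppose A pays 2^k with probability 2^{-k}, and “Find out A” works with probability 1/k when A = 2^k, so it fails more and more often for larger outcomes. Then

\[
P[\text{you'll know } A] = \sum_{k \ge 1} \frac{1}{k} \cdot 2^{-k} = \ln 2, \qquad
P[A = 2^k \mid \text{you'll know } A] = \frac{2^{-k}/k}{\ln 2},
\]
\[
E[A \mid \text{you'll know } A] = \sum_{k \ge 1} 2^k \cdot \frac{2^{-k}/k}{\ln 2} = \frac{1}{\ln 2} \sum_{k \ge 1} \frac{1}{k} = \infty.
\]

So A conditional on “Find out A” working still has infinite expected utility, even though “Find out A” usually fails on the largest outcomes.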
Let C be identically distributed to but statistically independent from A | you'll know A (not any specific value of A). C and A | you'll know A can each have infinite expected utility, by assumption, using an extended definition of “you” in which you get to expand arbitrarily, in extremely remote possibilities. C−$100 is also strictly stochastically dominated by A | you'll know A, so C−$100 ≺ A | you'll know A.
Now, consider the following prospect:
With probability p = P[you'll know A], it’s C−$100. With the other probability 1−p, it’s A | you won't know A. Letting X = A | you won't know A, we can abuse notation to write this in short-hand as

D = p(C−$100) + (1−p)(A | you won't know A) = p(C−$100) + (1−p)X

Then we can compare D to

A = p(A | you'll know A) + (1−p)(A | you won't know A) = p(A | you'll know A) + (1−p)X

A strictly stochastically dominates D, so D ≺ A. Then the rest of the money pump argument follows, replacing B−$100 with D, and assuming “Find out A” only works sometimes, but enough of the time that A | you'll know A still has infinite expected utility.[1] You don’t know ahead of time when “Find out A” will work, but when it does, you’ll switch to D, which would then be C−$100, and when “Find out A” doesn’t work, it makes no difference. So, your options become:
you (sometimes) pay $50 ahead of time and switch to A−$100 to avoid switching to the dominated D, which is a sure loss relative to sticking through with A when you do it, and so irrational.
you stick through with A (or the conditionally stochastically equivalent prospect X) sometimes when “Find out A” works, despite C−$100 beating the outcome of A you find out, which is irrational.
you always switch to C−$100 when “Find out A” works, which is a dominated strategy ahead of time, and so irrational.
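To make the dominance step concrete, here’s a small numerical sketch under illustrative assumptions (truncated distributions, p = 1/2, and C an independent copy of A | you'll know A; none of these specifics come from the comment above). It checks that A’s survival function is at least D’s at every threshold and strictly higher at some, i.e. that A strictly (first-order) stochastically dominates D.

```python
from fractions import Fraction

# Truncated stand-ins: "A | you'll know A" and "A | you won't know A" both pay 2**k
# with probability 2**-k for k = 1..K, with leftover mass put on the largest payoff.
K = 20
payoffs = [2**k for k in range(1, K + 1)]
probs = [Fraction(1, 2**k) for k in range(1, K + 1)]
probs[-1] += 1 - sum(probs)              # close off the truncated tail

p = Fraction(1, 2)                       # P["Find out A" works], illustrative

# A = p*(A | know) + (1-p)*(A | don't know); both branches share this distribution here.
A = dict(zip(payoffs, probs))

# D = p*(C - $100) + (1-p)*(A | don't know), with C an independent copy of A | know.
D = {}
for x, pr in zip(payoffs, probs):
    D[x - 100] = D.get(x - 100, Fraction(0)) + p * pr
    D[x] = D.get(x, Fraction(0)) + (1 - p) * pr

def survival(dist, t):
    """P[outcome > t]."""
    return sum(pr for x, pr in dist.items() if x > t)

thresholds = sorted(set(A) | set(D))
weakly_dominates = all(survival(A, t) >= survival(D, t) for t in thresholds)
strictly_somewhere = any(survival(A, t) > survival(D, t) for t in thresholds)
print(weakly_dominates, strictly_somewhere)   # True True: strict stochastic dominance
```

The same comparison goes through for any truncation level and any p in (0, 1), since subtracting $100 shifts C’s distribution left while the other branch is shared by A and D.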
On the other hand, if we can’t rule out arbitrarily large finite brains with certainty, then the requirements of rationality (whatever they are) should still apply when we condition on their being possible.
Maybe we should discount some very low probabilities (or probability differences) to 0 (and I’m very sympathetic to this), but that would also be vulnerable to money pump arguments and undermine expected utility theory, because it also violates the standard finitary versions of the Independence axiom and Sure-Thing Principle.
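As one concrete illustration of such a violation (a toy example with an arbitrary cutoff, not anything from the post): an agent that rounds probabilities below some threshold to 0 ends up indifferent between two mixtures that the Independence axiom says it must strictly rank.

```python
# Toy "round tiny probabilities to zero" evaluator with an arbitrary cutoff.
CUTOFF = 1e-9

def rounded_ev(lottery: dict) -> float:
    """Expected value after discarding outcomes whose probability is below CUTOFF."""
    return sum(x * pr for x, pr in lottery.items() if pr >= CUTOFF)

# X ($1,000,000 for sure) is strictly preferred to Y ($0 for sure).
# Independence requires q*X + (1-q)*Z  >  q*Y + (1-q)*Z for any q > 0 and any Z.
q = 1e-12                      # below the cutoff
Z = 10.0                       # some third prize
mix_with_X = {1_000_000.0: q, Z: 1 - q}
mix_with_Y = {0.0: q, Z: 1 - q}
print(rounded_ev(mix_with_X), rounded_ev(mix_with_Y))
# Both print the same value: the rounding makes the agent indifferent exactly where
# Independence demands a strict preference, which is what money pumps can exploit.
```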
I would guess that arbitrarily large but finite (extended) brains are much less realistic than infinite universes, though. I’d put a probability <1% on arbitrarily large brains being possible, but probability >80% on the universe being infinite. So, maybe actual infinities can make do with more conservative assumptions than the particular money pump argument in my post (but not necessarily unboundedness in general).
From my Responses section:

The hypothetical situations where irrational decisions would be forced could be unrealistic or very improbable, and so seemingly irrational behaviour in them doesn’t matter, or matters less. The money pump I considered doesn’t seem very realistic, and it’s hard to imagine very realistic versions. Finding out the actual value (or a finite upper bound on it) of a prospect with infinite expected utility conditional on finite actual utility would realistically require an unbounded amount of time and space to even represent. Furthermore, for utility functions that scale relatively continuously with events over space and time, with unbounded time, many of the events contributing utility will have happened, and events that have already happened can’t be traded away. That being said, I expect this last issue to be addressable in principle by just subtracting from B−$100 the value in A already accumulated in the time it took to estimate the actual value of A, assuming this can be done without all of A’s value having already been accumulated.

(Maybe I’m understating how unrealistic this is.)
The actual outcome would be an unbounded (across outcomes) representation of itself, but that doesn’t undermine the argument.
I personally think unbounded utility functions don’t work; I’m not claiming otherwise here. The comment above is about the thought experiment.
For anyone unaware, a Turing machine is a type of computer that you can actually build and run, but not the architecture we actually use.
Or otherwise beats each of its actual possible outcomes.