there not being enough practically accessible matter available (even if we only ever need a finite amount), and
This is what I was thinking about. If I need a supply of matter set aside in advance to be able to record/receive an answer, no finite supply suffices. Only an infinite brain/tape, or infinite pile of tape-making resources, would suffice.
If the resources are created on demand ex nihilo, and in such a way that the expansion processes can’t be just ‘left on’, you could try to jury-rig around it.
The resources wouldn’t necessarily need to be created on demand ex nihilo either (although that would suffice), but either way, we’re forced into extremely remote possibilities that deny our current best understanding of physics, and that are perhaps less likely than infinite accessible resources (or other relevant infinities). That should be enough to say this route is less conservative than actual infinities, which makes your point for this particular money pump, but it again doesn’t necessarily depend on actual infinities. However, some people actually assign 0 probability to infinity (I think they’re wrong to do so), and some of them may be willing to grant this possibility instead. For them, it would actually be more conservative.
The resources could just already exist, by assumption, in large enough quantities in each outcome of the prospect (at least with nonzero probability for arbitrarily large finite quantities). For example, the prospect could be partially about how much information we can represent to ourselves (or recognize). We could be uncertain about how much matter would be accessible and how much we could do with it, and we may not be able to put an absolute hard upper bound on that with certainty, even if we could with near-certainty, given our understanding of physics and the universe and our confidence in them. And this could still be the case conditional on no infinities. So, we could consider prospects with extremely low-probability heavy tails for how much we could represent to ourselves, which would have the important features of St Petersburg prospects for the money pump argument. It’s also something we’d care about naturally, because larger possible representations would tend to coincide with much more possible value.
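As a toy illustration of those features (my own example, not from the original discussion): let V be a prospect that, for each n = 1, 2, 3, …, realizes 2^n units of value with probability 2^(−n). Every outcome of V is finite, but

E[V] = Σ_n 2^(−n) · 2^n = Σ_n 1 = ∞,

so its expected value diverges purely through an extremely low-probability heavy tail, which is the feature the money pump argument needs from a St Petersburg prospect.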
St Petersburg prospects already depend on extremely remote possibilities to be compelling, so if you object to extremely low probabilities, or instead assign them probability 0 (deny the hypothetical), then you can already object at this point without actual infinities. That being said, someone could hold that finding out arbitrarily large values of a St Petersburg prospect is impossible with certainty (absent an actual infinity), thereby rejecting Cromwell’s rule, while still holding that St Petersburg prospects themselves are possible.
If you don’t deny with certainty the possibility of finding out unbounded values without actual infinities, then we can allow “Find out A” to fail sometimes, but work in enough exotic possibilities with heavy tails that A conditional on it working (but not on its specific value) still has infinite expected utility. Then we can replace B−$100 in the money pump with a prospect D, defined below, and you still get a working money pump argument.
Let C be identically distributed to, but statistically independent from, A | you’ll know A (not conditional on any specific value of A). C and A | you’ll know A can each have infinite expected utility, by assumption, using an extended definition of “you” in which you get to expand arbitrarily, in extremely remote possibilities. C−$100 is also strictly stochastically dominated by A | you’ll know A, so C−$100 ≺ A | you’ll know A.
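To spell out that last dominance claim (this is just my unpacking of the definitions above, not an addition to the argument): since C is distributed like A | you’ll know A, for every threshold x,

P[C−$100 ≥ x] = P[C ≥ x + $100] ≤ P[C ≥ x] = P[A ≥ x | you’ll know A],

with strict inequality at any x for which A | you’ll know A has positive probability of landing in [x, x + $100). So A | you’ll know A is at least as likely to clear every threshold and strictly more likely to clear some, i.e., C−$100 ≺ A | you’ll know A.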
Now, consider the following prospect:
With probability p = P[you’ll know A], it’s C−$100. With the other probability 1−p, it’s A | you won’t know A. We can abuse notation to write this in shorthand as
D = p(C−$100) + (1−p)(A | you won’t know A) = p(C−$100) + (1−p)X,

where X = A | you won’t know A. We can compare D to
A = p(A | you’ll know A) + (1−p)(A | you won’t know A) = p(A | you’ll know A) + (1−p)X.

Since C−$100 ≺ A | you’ll know A and the two prospects agree on the 1−p branch, A strictly stochastically dominates D, so D ≺ A (see the simulation sketch after the list below for an illustration). Then the rest of the money pump argument follows, replacing B−$100 with D, and assuming “Find out A” only works sometimes, but enough of the time that A | you’ll know A still has infinite expected utility.[1] You don’t know ahead of time when “Find out A” will work, but when it does, you’ll switch to D, which would then be C−$100, and when “Find out A” doesn’t work, it makes no difference. So, your options become:
you (sometimes) pay $50 ahead of time and switch to A−$100 to avoid switching to the dominated D, which is a sure loss relative to sticking through with A when you do it, and so irrational.
you stick through with A (or the conditionally stochastically equivalent prospect X) sometimes when “Find out A” works, despite C−$100 beating the outcome of A you find out, which is irrational.
you always switch to C−$100 when “Find out A” works, which is a dominated strategy ahead of time, and so irrational.
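For concreteness, here is a minimal simulation sketch of the D-vs-A comparison above. The specific choices are my own illustrative assumptions rather than anything required by the argument: A is a standard St Petersburg prospect (payoff 2^n with probability 2^(−n)), and whether “Find out A” works is taken to be independent of A’s value, with a made-up probability p.

```python
import numpy as np

# Minimal simulation sketch of the D-vs-A comparison above. The specific
# choices are illustrative assumptions, not requirements of the argument:
# A is a standard St Petersburg prospect (payoff 2^n with probability 2^-n),
# and whether "Find out A" works is independent of A's value.
rng = np.random.default_rng(0)
N = 1_000_000
p = 0.1  # hypothetical probability that "Find out A" works

def st_petersburg(size):
    # Payoff 2^n with probability 2^-n, for n = 1, 2, 3, ...
    n = rng.geometric(0.5, size=size)
    return 2.0 ** n

A = st_petersburg(N)             # the original prospect
C = st_petersburg(N)             # independent copy, distributed like A | you'll know A
works = rng.random(N) < p        # whether "Find out A" works this time
D = np.where(works, C - 100, A)  # switch to C - $100 only when it works

# A should be at least as likely as D to clear every threshold, and strictly
# more likely at some thresholds (strict stochastic dominance, so D ≺ A).
for x in [2, 4, 16, 256, 4096]:
    print(f"x = {x}: P[A >= x] ~ {(A >= x).mean():.4f}, P[D >= x] ~ {(D >= x).mean():.4f}")
```

Since A’s expected value is infinite and heavy-tailed, sample means aren’t informative here; the per-threshold probabilities are the quantities the stochastic dominance claim is actually about.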
[1] Or otherwise beats each of its actual possible outcomes.