I haven't read the paper, but a simple objection is that you're never going to be certain your actions only have finite effects, because you should only assign credence 0 to contradictions. (I don't actually know the argument for the latter, but some philosophers believe it.) So you have to deal with the very, very small but not literally 0 chance that your actions will have an infinitely good/bad outcome because your current theories of how the universe works are wrong. However, anything with a chance of bringing about an infinitely good or bad outcome has an infinite expected value or an undefined one. So unless all expected values are undefined (which brings its own problems) you have to deal with infinite expected values, which is enough to cause trouble.
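Spelled out, the expected-value step here is just the following (my restatement of the standard argument, not anything from the paper):

```latex
% Any nonzero credence p in an infinitely good outcome already makes the
% expectation infinite; mixing in a nonzero credence in an infinitely bad
% outcome makes it undefined instead.
\[
\mathbb{E}[V] \;=\; p \cdot (+\infty) \;+\; (1-p) \cdot v_{\text{finite}} \;=\; +\infty
\qquad \text{for any } p > 0 .
\]
```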
Manheim and Sandberg address your objection in the paper persuasively (to me personally), so let me quote them, since directly addressing these arguments might change my mind. @MichaelStJules I'd be keen to get your take on this as well. (I'm not quoting the footnotes, even though they were key to persuading me too.)
Section 4.1, "Rejecting Physics":
4.1.1 Pessimistic Meta-induction and expectations of falsification
The pessimistic meta-induction warns that since many past successful scientific theories were found to be false, we have no reason to expect that our currently successful theories are approximately true. Hence, for example, the above constraints on information processing are not guaranteed to imply finitude. Indeed, many of them are based on information physics that is weakly understood and liable to be updated in new directions. If physics in our universe does, in fact, allow for access to infinite matter, energy, time, or computation through some as-yet-undiscovered loophole, it would undermine the central claim to finitude.
This criticism cannot be refuted, but there are two reasons to be at least somewhat skeptical. First, scientific progress is not typically revisionist, but rather aggregative. Even the scientific revolutions of Newton, then Einstein, did not eliminate gravity, but rather explained it further. While we should regard the scientific input to our argument as tentative, the fallibility argument merely shows that science will likely change. It does not show that it will change in the direction of allowing infinite storage. Second, past results in physics have increasingly found strict bounds on the range of physical phenomena rather than unbounding them. Classical mechanics allows for far more forms of dynamics than relativistic mechanics, and quantum mechanics strongly constrains what can be known and manipulated on small scales.
While all of these arguments in defense of physics are strong evidence that it is correct, it is reasonable to assign a very small but non-zero value to the possibility that the laws of physics allow for infinities. In that case, any claimed infinities based on a claim of incorrect physics can only provide conditional infinities. And those conditional infinities may be irrelevant to our decisionmaking, for various reasons.
4.1.2 Boltzmann Brains, Decisions, and the indefinite long-term
One specific possible consideration for an infinity is that after the heat-death of the universe there will be an indefinitely long period where Boltzmann brains can be created from random fluctuations. Such brains are isomorphic to thinking human brains, and in the infinite long-term, an infinite number of such brains might exist [34]. If such brains are morally relevant, this seems to provide a value infinity.
We argue that even if these brains have moral value, it is by construction impossible to affect their state, or the distribution of their states. This makes their value largely irrelevant to decision-making, with one caveat. That is, if a decision-maker believes that these brains have positive or negative moral value, it could influence decisions about actions that could (or would intentionally) destroy space-time, for instance by causing a false-vacuum collapse. Such an action would be a positive or negative decision, depending on whether the future value of a non-collapsed universe is otherwise positive or negative. Similar and related implications exist depending on whether a post-collapse universe itself has a positive or negative moral value.
Despite the caveat, however, a corresponding (and less limited) argument can be made about decisionmaking for other proposed infinities that cannot be affected. For example, inaccessible portions of the universe, beyond the reachable light-cone, cannot be causally influenced. As long as we maintain that we care about the causal impacts of decisions, they are irrelevant to decisionmaking.
Section 4.2.4 more directly addresses the objection, I think. (Unfortunately the copy-pasting doesn't preserve the mathematical formatting, so perhaps it'd be clearer to just look at page 12 of their paper; in particular, I've simplified their notation for $1 in 2020 to just $1):
4.2.4 Bounding Probabilities
As noted above, any act considered by a rational decision maker, whether consequentialist or otherwise, is about preferences over a necessarily finite number of possible decisions. This means that if we restrict a decision-maker or ethical system to finite, non-zero probabilities relating to finite value assigned to each end state, we end up with only finite achievable value. The question is whether probabilities can in fact be bounded in this way.
We imagine Robert, faced with a choice between getting $1 with certainty, and getting $100 billion with some probability. Given that there are two choices, Robert assigns utility in proportion to the value of the outcome weighted by the probability. If the probability is low enough, yet he chooses the option, it implies that the value must be correspondingly high.
As a first argument, imagine Robert rationally believes there is a probability of 10^-100 of receiving the second option, and despite the lower expected dollar value, chooses it. This implies that he values receiving $100 billion at approximately 10^100 times the value of receiving $1. While this preference is strange, it is valid, and can be used to illustrate why Bayesians should not consider infinitesimal probabilities valid.
To show this, we ask what would be needed for Robert to be convinced this unlikely event occurred. Clearly, Robert would need evidence, and given the incredibly low prior probability, the evidence would need to be stupendously strong. If someone showed Robert that his bank balance was now $100 billion higher, that would provide some evidence for the claim, but on its own, a bank statement can be fabricated, or in error. This means the provided evidence is not nearly enough to convince him that the event occurred. In fact, with such a low prior probability, it seems plausible that Robert could have everyone he knows agree that it occurred, see newspaper articles about the fact, and so on, and given the low prior odds assigned, still not be convinced. Of course, in the case that the event happened, the likelihood of getting all of that evidence will be much higher, causing him to update towards thinking it occurred.
A repeatable experiment which generates uncorrelated evidence could provide far more evidence over time, but complete lack of correlation seems implausible; checking the bank account balance twice gives almost no more evidence than checking it once. And as discussed in the appendix, even granting the possibility of such evidence generation, the amount possible is still bounded by available time, and therefore finite.
Practically, perhaps the combination of evidence reaches odds of 10^50:1 that the new money exists versus that it does not. Despite this, if he truly assigned the initially implausibly low probability, any feasible update would not be enough to make the event, receiving the larger sum, be a feasible contender for what Robert should conclude. Not only that, but we posit that a rational decision maker should know, beforehand, that he cannot ever conclude that the second case occurs.
If he is, in fact, a rational decision maker, it seems strange to the point of absurdity for him to choose something he can never believe occurred, over the alternative of a certain small gain.
Generally, then, if an outcome is possible, at some point a rational observer must be able to be convinced, by aggregating evidence, that it occurred. Because evidence is a function of physical reality, the possible evidence is bounded, just as value itself is limited by physical constraints. We suggest (generously) that the strength of this evidence is limited to odds of the number of possible quantum states of the visible universe (a huge but finite value) to 1. If the prior probability assigned to an outcome is too low to allow for a decision maker to conclude it has occurred given any possible universe, no matter what improbable observations occur, we claim the assigned probability is not meaningful for decision making. As with the bound on lexicographic preferences, this bound allows for an immensely large assignment of value, even inconceivably so, but it is again still finite.
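To make the arithmetic in 4.2.4 concrete, here's a rough sketch in Python of Robert's choice and the update the authors describe. The 10^-100 prior and the 10^50:1 combined evidence are the paper's own illustrative figures; the variable names and everything else are my restatement, not their notation:

```python
import math

# Robert's choice: $1 for sure vs. $100 billion with probability 10^-100.
p_win = 1e-100
value_certain = 1.0  # utility of the sure $1, normalised to 1

# If Robert prefers the gamble, expected utilities must satisfy
#   p_win * u_win >= value_certain, i.e. u_win >= 1 / p_win = 10^100.
implied_utility_ratio = value_certain / p_win
print(f"implied value of the $100B outcome: at least "
      f"10^{math.log10(implied_utility_ratio):.0f} times the value of the $1")

# Bayesian update: prior odds of 10^-100 : 1 that the payout happened,
# combined with evidence worth (generously) 10^50 : 1.
log10_prior_odds = -100
log10_likelihood_ratio = 50
log10_posterior_odds = log10_prior_odds + log10_likelihood_ratio
print(f"posterior odds that the payout happened: 10^{log10_posterior_odds} : 1")
```

The point is just that no feasible likelihood ratio can rescue a prior that low: Robert could gather every piece of evidence the paper allows and still assign the payout a probability of roughly 10^-50, which is why the authors conclude that such probabilities aren't meaningful for decision making.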