On the other hand, if we can’t rule out arbitrarily large finite brains with certainty, then the requirements of rationality (whatever they are) should still apply when we condition on their being possible.
Maybe we should discount some very low probabilities (or probability differences) to 0 (and I’m very sympathetic to this), but that move would also be vulnerable to money pump arguments and would undermine expected utility theory, because it also violates the standard finitary versions of the Independence axiom and the Sure-Thing Principle.
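To make the money-pump worry concrete, here’s a rough sketch (the threshold EPSILON, the helper truncated_expected_value, and all the numbers are just made up for this illustration): an agent who rounds probabilities below some threshold to 0 will accept each of a long series of bets it should refuse, while rejecting the very same bets bundled into a single lottery.

```python
EPSILON = 1e-6  # hypothetical threshold: probabilities below this are treated as 0

def truncated_expected_value(lottery):
    """Expected value of [(probability, payoff), ...], ignoring
    any outcome whose probability falls below EPSILON."""
    return sum(p * v for p, v in lottery if p >= EPSILON)

# One bet: collect a 0.01 fee, but take on a 1e-7 chance of losing 1,000,000.
p_loss, loss, fee = 1e-7, -1_000_000, 0.01

# True expected value is 0.01 - 1e-7 * 1_000_000 = -0.09, so the bet is bad,
# but the truncating agent sees only the fee and accepts.
single_bet = [(1 - p_loss, fee), (p_loss, loss + fee)]
print(truncated_expected_value(single_bet))  # ~0.01: accepted

# Offer the same bet 10 million independent times. Each one is accepted in
# isolation, yet the chance of suffering at least one loss is now large.
n = 10_000_000
total_fees = n * fee                # 100,000 in fees collected
p_any_loss = 1 - (1 - p_loss) ** n  # ~0.63, far above EPSILON

# Simplification: treat "at least one loss" as a single 1,000,000 loss
# (an underestimate of the true downside).
bundle = [(1 - p_any_loss, total_fees), (p_any_loss, loss + total_fees)]
print(truncated_expected_value(bundle))  # ~ -532,000: rejected as a package

# Money pump: having accepted all n bets one at a time (collecting 100,000
# in fees), the agent now values its own position at ~ -532,000 by its own
# lights, so it will pay up to that much to be released from it, handing the
# exploiter a guaranteed profit despite the agent consenting at every step.
```

So the agent prefers each step of a package it rejects as a whole, and that reversal over compounded lotteries is exactly the kind of inconsistency the finitary Independence axiom and Sure-Thing Principle rule out.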