SIA > SSA, part 3: An aside on betting in anthropics
(Cross-posted from Hands and Cities. Previously in sequence: Part 1: Learning from the fact that you exist; Part 2: Telekinesis, reference classes, and other scandals.)
This post is the third in a four-part sequence, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part briefly discusses betting in anthropics. In particular: why it’s so gnarly, why I’m not focusing on it, and why I don’t think it’s the only desideratum. If you’re not interested in betting-type arguments, feel free to skip to part 4.
XII. An aside on betting in anthropics
I’ve now covered my main objections to SSA. In part 4, I say more in defense of SIA in particular. Before doing so, though, I want to mention a whole category of argument that I’ve generally avoided in this post: that is, arguments about what sorts of anthropic theories will lead to the right patterns of betting behavior.
I expect that, to some readers, this will seem a glaring omission. What’s the use of talking about credences, if we’re not talking about betting? What are credences, even, if not “the things you bet with”? Indeed, for some people, the question of “which anthropic theory will get me the most utility when applied” will seem the only question worth asking in this context, and they will have developed their own views about anthropics centrally with this consideration in mind. Why, then, aren’t I putting it front and center?
Basically, because questions about betting in anthropics get gnarly really fast. I’m hoping to write about them at some point, but this series of posts in particular is already super long. That said, I also don’t think that questions about betting are the only desiderata. Let me explain.
Why is betting in anthropics gnarly? At a high level, it’s because how you should bet, in a given case, isn’t just a function of your credences. It’s also a function of things like whether you’re EDT-ish or CDT-ish, your level of altruism towards copies of yourself/other people in your epistemic position, how that altruism expresses itself (average vs. total, bounded vs. unbounded), and the degree to which you go in for various “act as you would’ve pre-committed to acting from some prior epistemic position” type moves (e.g. “updatelessness”) — either at the level of choices (whether your own choices, or those of some group), or at the level of epistemology itself. Anthropics-ish cases tend to implicate these issues to an unusual degree, and in combination, they end up as a lot of variables to hold in your head at once. Indeed, there is some temptation to moosh them together, egged on by their intertwined implications. But they are, I think, importantly separable.
I’ll give one example to illustrate a bit of the complexity here. You might be initially tempted by the following argument for thirding, rather than halfing, in Sleeping Beauty. “Suppose you’re a halfer. That means that when you wake up, you’ll take (or more specifically, be indifferent to) a bet like: ‘I win $10 if heads, I lose $10 if tails.’ After all, it’s neutral in expectation. But if you take that sort of bet on every waking, then half the time, you’ll end up losing $10 twice: once on Monday, and once on Tuesday. Thus, the EV of a ‘halfer’ policy is negative. But if you’re a thirder, you’ll demand to win $20 if heads, in order to accept a $10 loss on tails. And the EV of this policy is indeed neutral. So, you should be a thirder.”
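(For concreteness, here’s that ex ante arithmetic written out as a tiny calculation. The policy_ev helper is just my illustration of the expected values described above; the dollar amounts are the ones from the example.)

```python
# Ex ante EV of a per-waking betting policy in Sleeping Beauty.
# Heads: one waking (Monday), so the bet is taken once.
# Tails: two wakings (Monday and Tuesday), so the bet is taken twice.

def policy_ev(win_if_heads, lose_if_tails, p_heads=0.5):
    """EV, evaluated before the coin flip, of accepting
    'win $win_if_heads if heads, lose $lose_if_tails if tails' at every waking."""
    return p_heads * win_if_heads + (1 - p_heads) * (-2 * lose_if_tails)

print(policy_ev(10, 10))  # -5.0: the 'halfer' even-odds policy loses money on average
print(policy_ev(20, 10))  # 0.0: the 'thirder' 2:1 policy is neutral, as the argument claims
```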
But this argument doesn’t work if Beauty’s person-moments are EDT-ish (and altruistic towards each other). Suppose you’re a halfer person-moment offered the even-odds bet above on each waking. You reason: “It’s 50% that I’m in a heads world, and 50% that I’m in a tails world. But if I’m in a tails world, there’s also another version of me, who will be making this same choice, and whose decision is extremely correlated with mine. Thus, if I accept, that other version will accept too, and we’ll end up losing twice. Thus, I reject.” That is, in this case, your betting behavior doesn’t align with your credences. Is that surprising? Sort of. But in general, if you’re going to take a bet different numbers of times conditional on one outcome vs. another, the relationship between the odds you’ll accept and your true credences gets much more complicated than usual. This is similar to the sense in which, even if I am 50-50 on heads vs. tails, I am not indifferent to a package in which I take the bet “win $10 on heads, lose $10 on tails” conditional on heads, and the bet “win $20 on heads, lose $20 on tails” conditional on tails. Even though both of the bets are at 1:1 odds (and hence both are neutral in expectation pre-coin-flip), I’d be taking the bigger-stakes bet on the condition that I lose. (See Arntzenius (2002) for more.)
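(To put numbers on that analogy, using the figures above: with 50-50 credence on the coin, but the bigger-stakes bet landing on the branch where I lose, the package comes out negative. A quick sketch:)

```python
# 50-50 on the coin, but which bet I take depends on the outcome:
# heads -> I take 'win $10 / lose $10' and win; tails -> I take 'win $20 / lose $20' and lose.
p_heads = 0.5
package_ev = p_heads * 10 + (1 - p_heads) * (-20)
print(package_ev)  # -5.0: negative, even though each bet on its own is at fair 1:1 odds
```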
Indeed, the EDT-ish thirder, here, actually ends up betting like a fifth-er. That is, if offered a “win $20 if heads, lose $10 if tails” bet upon each waking, she reasons: “It’s 1/3rd that I’m in a heads world and will win $20. But it’s 2/3rds that I’m in a tails world, where this bet is about to be taken or rejected twice, so accepting means losing $20. Thus, I should reject. To accept, the heads payout would need to be $40 instead.” And note that this argument applies both to SIA, and to SSA in the Dorr/Arntzenius “Beauty also wakes up on Heads Tuesday, but hears a bell in that case” version (thanks to Paul Christiano and Katja Grace for discussion). That is, every (non-updateless) altruistic EDT-er is a fifth-er sometimes.
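(Spelling that reasoning out as arithmetic, on the assumption that acceptance is perfectly correlated across the two tails wakings; the edt_thirder_ev helper is just illustrative:)

```python
from fractions import Fraction

# The EDT-ish thirder evaluating 'win $W if heads, lose $10 if tails' at a waking,
# where accepting means her correlated tails-copy also accepts (so the loss doubles).

def edt_thirder_ev(win_if_heads, p_heads=Fraction(1, 3)):
    return p_heads * win_if_heads + (1 - p_heads) * (-2 * 10)

print(edt_thirder_ev(20))  # -20/3: she rejects the 2:1 bet
print(edt_thirder_ev(40))  # 0: indifferent only at 4:1, i.e. betting like a 'fifth-er'
```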
(Or at least, this holds if you use the version of EDT I am most naively tempted by. Paul Christiano has recently argued to me that you should instead use a version of EDT where, instead of updating on your observations in the way I would’ve thought normal, and then picking the action with the highest EV, you instead just pick the action that has the highest EV-conditional-on-being-performed-in-response-to-your-observations, and leave the traditional notion of “I have a probability distribution that I update as I move through the world” to the side. I haven’t dug in on this, though: my sense is that the next steps in the dialectic involve asking questions like “why be a Bayesian at all.”)
Note that in these cases, I’ve been assuming that Beauty’s person-moments are altruistic towards each other. But we need not assume this. We could imagine, instead, versions where the person-moments will get to spend whatever money they win on themselves, before the next waking (if there is one), with no regard for the future of Beauty-as-a-whole. Indeed, in analogous cases with different people rather than different person-moments (e.g., God’s coin toss), altruism towards the relevant people-in-your-epistemic-position is a lot less of a default. And we’ll also need to start asking questions about what sort of pre-commitments it would’ve made sense to have made, from what sorts of epistemic/cooperative positions, and what the implications of that are or should be. I think questions like this are well worth asking. But I don’t really want to get into them here.
What’s more, I don’t think that they are the only questions. In particular, to me it seems pretty possible to separate the question of how to bet from the question of what to believe. Thus, for example, in the EDT-ish halfer case above, it seems reasonable to me to imagine thinking: “I’m 50% on heads, here, but if it’s tails, then it’s not just me taking this ‘win $10 if heads, lose $10 if tails’ bet; it’s also another copy of me, whose interests I care about. Thus, I will demand $20 if heads instead.” You can reason like that, and then step out of your room and continue to expect to see a heads-up coin with the same confidence you normally do after you flip. Maybe this is in some sense the wrong sort of expectation, but I don’t think your betting behavior, on its own, establishes this.
(One response here is: “you’re mistakenly thinking that you can bet for two, but expect for one. But actually, you’re expecting for both, too. And don’t you care about the accuracy of your copy’s beliefs too? And what is expecting if not a bet? What about the tiny bit of pleasure or pain you’ll experience upon calling it for tails as you step out of the room? Don’t you want that for both copies?”. Maybe, maybe.)
More generally, it doesn’t feel to me like the types of questions I end up asking, when I think about anthropics, are centrally about betting. Suppose I am wondering “is there an X-type multiverse?” or “are there a zillion zillion copies of me somewhere in the universe?”. I feel like I’m just asking a question about what’s true, about what kind of world I’m living in — and I’m trying to use anthropics as a guide in figuring it out. I don’t feel like I’m asking, centrally, “what kinds of scenarios would make my choices now have the highest stakes?”, or “what would a version of myself behind some veil of ignorance have pre-committed to believing/acting-like-I-believe?”, or something like that. Those are (or, might be) important questions too. But sometimes you’re just, as it were, curious about the truth. And more generally, in many cases, you can’t actually decide how to bet until you have some picture of the truth. That is: anthropics, naively construed, purports to offer you some sort of evidence about the actual world (that’s what makes it so presumptuous). Does our place in history suggest that we’ll never make it to the stars? Does the fact that we exist mean that there are probably lots of simulations of us? Can we use earth’s evolutionary history as evidence for the frequency of intelligent life? Naively, one answers such questions first, then decides what to do about it. And I’m inclined to take the naive project on its face.
Indeed, I’ve been a bit surprised by the extent to which some people writing about anthropics seem interested in adjusting (contorting?) their epistemology per se in order to bet a particular way — instead of just, you know, betting that way. This seems especially salient to me in the context of discussions about dynamical inconsistencies between the policy you’d want to adopt ex ante, and your behavior ex post (see, e.g., here). As I discussed in my last post, these cases are common outside of anthropics, too, and “believe whatever you have to in order to do the right thing” doesn’t seem the most immediately attractive solution. Thus, for example, if you arrive in a Newcomb’s case with transparent boxes, and you want to one-box anyway, the thing to do, I suspect, is not to adopt whatever epistemic principles will get you to believe that the one box is opaque. The thing to do is to one-box. (Indeed, part of the attraction of updateless-ish decision theories is that they eliminate the need for epistemic distortion of this kind.) I expect that the thing to say about various “but that anthropic principle results in actions you’d want to pre-commit to not taking” (for example, in cases of “fifth-ing” above) is similar.
Still, I suspect that some will take issue with the idea that we can draw any kind of meaningful line between credences and bets. And some will think it doesn’t matter: you can describe the same behavior in multiple ways. Indeed, one possible response to cases like God’s coin toss is to kind of abandon the notion of “credences” in the context of anthropics (and maybe in general?), and to just act as you would’ve pre-committed to acting from the perspective of some pre-anthropic-update prior, where the right commitment to make will depend on your values (I associate this broad approach with Armstrong’s “Anthropic Decision Theory,” though his particular set-up involves more structure, and I haven’t dug into it in detail). This is pretty similar in spirit to just saying “use some updateless-type decision theory,” but it involves a more explicit punting on/denial of the notion of “probabilities” as even-a-thing-at-all — at least in the context of questions like which person-in-your-epistemic-situation you are. That is, as I understand it, you’re not saying the equivalent of “I’m ~100% that both boxes are full, but I’m one-boxing anyway.” Rather, you’re saying the equivalent of: “all this talk about ‘what do you think is in the boxes’ is really a way of re-describing whether or not you one-box. Or at least, it’s not important/interesting. What’s key is what you do: and I, for one, one-box, because from some epistemic perspective, I would’ve committed to doing so.”
This approach has merits. In particular, it puts the focus directly on action, and it allows you to reason centrally from the perspective of the pre-anthropic-update prior (even if you eventually end up acting like an SIA-er, a Presumptuous Philosopher, and so on), which can feel like a relief. Personally, though, I currently prefer to keep the distinctions between e.g. credences, values, and decision-theories alive and available — partly to stay alert to implications and subtleties you would miss if you mooshed them together and just talked about e.g. “policies,” and partly because, as just discussed, they just seem like different things to me (e.g., “do I believe this dog exists” is not the same as “do I love this dog”). And I also have some worry that the “pre-anthropic update prior” invoked by this approach is going to end up problematic in the same way that the prior becomes problematic for updateless decision theories in general. (E.g., how do you know what pre-commitments to make, if you don’t have credences? Which credences should we use? What if the epistemic perspective from which you’re making/evaluating your pre-commitment implicates anthropic questions, too?)
At some point, I do want to get more clarity about the betting stuff, here — obviously, it’s where rubber ultimately meets road. For now, though, let’s move on.
(Next and last post in sequence: SIA > SSA, part 4: In defense of the presumptuous philosopher.)
What do we mean by probability in the sense of credences? I would suggest that these kinds of claims only make sense from within the model and that we aren’t making a literal ontological claim. Here’s an example of what would count as an ontological claim about probability from quantum mechanics: probability distributions correspond to quantum states. In contrast, when we’re talking about credences, almost none of our uncertainty is due to uncertainty inherent in physics itself, such as comes from quantum uncertainty. That is, this uncertainty is present in our model of the universe rather than in the universe itself.
Following this logic seems to suggest that anthropic claims not related to quantum mechanics shouldn’t be taken literally either. My take on probability is (unsurprisingly) essentially the same as my take on counterfactuals:
a) That probability is a partially-constructed, partially-intrinsic frame that we impose on the world and use to organise our experiences
b) However, that it’s possible that this frame may be subsumed into another in the same way that Einstein subsumed space and time into space-time.
The partially constructed nature of probability means that we can’t really talk about what it means outside of a particular context or set of goals. Betting provides one such way in which we can define a context or task, although I suspect it’d be more fruitful to frame this in terms of a scoring system rather than betting behaviour.
You identified that, for example, an altruistic, halfer EDT agent bets in a way that is different from its credences. This is a problem if we try to equate betting and credences. On the other hand, if we think about credences in terms of adopting a particular scoring system, then this problem disappears. If the scoring system was constructed for non-altruistic agents, then it’s hardly surprising if an altruistic agent acts in a way that appears strange on the face of it, given the score that it assigned. The altruistic agent may still be able to make use of the scoring system by making appropriate adjustments. Indeed, my preferred approach to anthropics is that, insofar as it is possible, our value system should be built on top of our anthropics, instead of being integrated into it (1).
It is worth noting that adopting a scoring system doesn’t inherently indicate a value commitment, since one scoring system may make sense in one context and another in a different context; nonetheless, if there is to be a standard scoring system, it has to be one or the other.
So the question of anthropic probability breaks down as follows:
a) If we are going to have a standard scoring system called probability, what convention should we adopt in relation to anthropics?
b) In what circumstances and for what purposes do different scoring systems make sense?
Even though you make legitimate criticisms of framing the problem in terms of betting, this core insight from the betting perspective survives untouched.
(1) This may not be possible insofar as choosing axioms is a value judgment rather than a judgment of fact, but it should be possible to construct a decision theory that is useful to both altruistic and non-altruistic agents.
To me this seems like a strong argument that we shouldn’t separate credences from betting behaviour. If your arguments lead to a “special type of credences” which it’s silly for EDT agents to use to bet, then that just indicates that EDT-type reasoning is built into the plausibility of SIA.
In other words: you talk about contorting one’s epistemology in order to bet a particular way, but what’s the alternative? If I’m an EDT agent who wants to bet at odds of a third, what is the principled reasoning that leads me to have credence of a half? Seems like that’s just SSA again.
In fact, I want to offer an alternative framing of your objections to SSA. You argue that questions like “could I have been a chimpanzee” seem ridiculous. But these are closely analogous to the types of questions that one needs to ask when making decisions according to FDT (e.g. “are the decisions of chimpanzees correlated with my own?”) So, if we need to grapple with these questions somehow in order to make decisions, grappling with them via our choice of a reference class doesn’t seem like the worst way to do so.
I’m reminded of Yudkowsky’s writing about why he isn’t prepared to get rid of the concept of “anticipated subjective experience”, despite the difficulties it poses from a quantum-mechanical perspective.
If by this you mean “SIA is only plausible if you accept EDT,” then I disagree. I think many of the arguments for SIA—for example, “you should be 1/4 on each of tails-mon, tails-tues, heads-mon, and heads-tues in Sleeping Beauty with two wakings each, and then update to being a thirder if you learn you’re not in heads-tues,” “telekinesis doesn’t work,” “you should be one-half on not-yet-flipped fair coins,” “reference classes aren’t a thing,” etc.—don’t depend on EDT, or even on EDT-ish intuitions.
The alternative is to just bet the way you want to anyway, in the same way that the (most attractive, imo) alternative to two-boxing in transparent newcomb is not “believe that the boxes are opaque” but “one-box even though you know they’re transparent.” You don’t need to have a credence of a half to bet how you want to—especially if you’re updateless. And note that EDT-ish SSA-ers have the fifthing problem too, in cases like the “wake up twice regardless, then learn that you’re not heads-tuesday” version I just mentioned (where SSA ends up at 1/3rd on heads, too).
I think that “how much are my decisions correlated with those of the chimps?” is a much more meaningful and tractable question, with a much more determinate answer, than “are the chimps in my reference class?” Asking questions about correlations between things is the bread and butter of Bayesianism. Asking questions about anthropic reference classes isn’t—or, doesn’t need to be.
Thanks for the link. I haven’t read this piece, but fwiw, to me it feels like “there is a truth about the way that the world is/about what world I’m living in, I’m trying to figure out what that truth is” is something we shouldn’t give up lightly. I haven’t engaged much with the QM stuff here, and I can imagine it moving me, but “how are you going to avoid fifth-ing?” doesn’t seem like a strong enough push on its own.