Indeed, the EDT-ish thirder here actually ends up betting like a fifther. That is, if offered a “win twenty if heads, lose ten if tails” bet upon each waking, she reasons: “1/3rd I’m in a heads world and will win $20. But 2/3rds I’m in a tails world, where I’m about to take or reject this bet twice, and accepting loses $10 each time, for $20 total. Thus, I should reject. To accept, the heads payout would need to be $40 instead.”
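To make the arithmetic explicit, here’s a minimal sketch (my own illustration, not from the thread; the function name and structure are just for exposition) of the EDT-thirder’s calculation under the payoffs above:

```python
# EDT-ish thirder evaluating "win W if heads, lose $10 per waking if tails."
# Thirder credence: P(heads world) = 1/3, P(tails world) = 2/3.
# In a tails world she faces the bet twice, and EDT counts both losses.

def edt_value(heads_payout, loss_per_waking=10):
    p_heads = 1 / 3
    return p_heads * heads_payout - (1 - p_heads) * 2 * loss_per_waking

print(edt_value(20))  # -6.67...: reject the 20-vs-10 bet
print(edt_value(40))  # ~0.0: indifferent only once the heads payout is $40
# And 40-vs-10 is exactly the bet a one-shot agent with credence 1/5
# on heads (a "fifther") would be indifferent to: 0.2*40 == 0.8*10.
```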
To me this seems like a strong argument that we shouldn’t separate credences from betting behaviour. If your arguments lead to a “special type of credences” which it’s silly for EDT agents to use to bet, then that just indicates that EDT-type reasoning is built into the plausibility of SIA.
In other words: you talk about contorting one’s epistemology in order to bet a particular way, but what’s the alternative? If I’m an EDT agent who wants to bet at odds of a third, what is the principled reasoning that leads me to have credence of a half? Seems like that’s just SSA again.
In fact, I want to offer an alternative framing of your objections to SSA. You argue that questions like “could I have been a chimpanzee” seem ridiculous. But these are closely analogous to the types of questions that one needs to ask when making decisions according to FDT (e.g. “are the decisions of chimpanzees correlated with my own?”). So, if we need to grapple with these questions somehow in order to make decisions, grappling with them via our choice of a reference class doesn’t seem like the worst way to do so.
Suppose I am wondering “is there an X-type multiverse?” or “are there a zillion zillion copies of me somewhere in the universe?”. I feel like I’m just asking a question about what’s true, about what kind of world I’m living in — and I’m trying to use anthropics as a guide in figuring it out.
I’m reminded of Yudkowsky’s writing about why he isn’t prepared to get rid of the concept of “anticipated subjective experience”, despite the difficulties it poses from a quantum-mechanical perspective.
“that just indicates that EDT-type reasoning is built into the plausibility of SIA”
If by this you mean “SIA is only plausible if you accept EDT,” then I disagree. I think many of the arguments for SIA—for example, “you should put 1⁄4 on each of tails-mon, tails-tues, heads-mon, and heads-tues in Sleeping Beauty with two wakings each, and then update to being a thirder if you learn you’re not in heads-tues,” “telekinesis doesn’t work,” “you should be one-half on not-yet-flipped fair coins,” “reference classes aren’t a thing,” etc.—don’t depend on EDT, or even on EDT-ish intuitions.
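For concreteness, here’s a minimal sketch (again my own, not from the original exchange) of that first argument, assuming the uniform 1⁄4 prior over the four wakings it describes:

```python
# Uniform 1/4 prior over the four centered worlds in the
# two-wakings-each variant of Sleeping Beauty.
prior = {("heads", "mon"): 0.25, ("heads", "tues"): 0.25,
         ("tails", "mon"): 0.25, ("tails", "tues"): 0.25}

# Learn "I'm not in heads-tues": drop that world and renormalize.
posterior = {w: p for w, p in prior.items() if w != ("heads", "tues")}
total = sum(posterior.values())
posterior = {w: p / total for w, p in posterior.items()}

p_heads = sum(p for (coin, _), p in posterior.items() if coin == "heads")
print(p_heads)  # 0.333...: thirding falls out of ordinary conditioning
```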
you talk about contorting one’s epistemology in order to bet a particular way, but what’s the alternative? If I’m an EDT agent who wants to bet at odds of a third, what is the principled reasoning that leads me to have credence of a half?
The alternative is to just bet the way you want to anyway, in the same way that the (most attractive, imo) alternative to two-boxing in transparent Newcomb is not “believe that the boxes are opaque” but “one-box even though you know they’re transparent.” You don’t need to have a credence of a half to bet how you want to—especially if you’re updateless. And note that EDT-ish SSA-ers have the fifthing problem too, in cases like the “wake up twice regardless, then learn that you’re not heads-tues” version I just mentioned (where SSA ends up at 1/3rd on heads, too).
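To spell out that last claim, here’s a sketch (my own illustration, under the setup just described) of how the SSA-er in this variant also lands on 1/3, and how EDT then pushes her to fifthing as well:

```python
# SSA in the wake-up-twice-regardless case: 1/2 on each coin outcome,
# credence split evenly across the wakings within each world --
# which here also gives 1/4 per centered world.
ssa = {("heads", "mon"): 0.25, ("heads", "tues"): 0.25,
       ("tails", "mon"): 0.25, ("tails", "tues"): 0.25}

# Learn "not heads-tues" and renormalize: SSA reaches 1/3 on heads too.
post = {w: p for w, p in ssa.items() if w != ("heads", "tues")}
z = sum(post.values())
p_heads = sum(p / z for (coin, _), p in post.items() if coin == "heads")
print(p_heads)  # 0.333...

# An EDT agent with these credences, whose choice is mirrored at both
# tails wakings, breaks even only at a $40 heads payout -- fifthing again.
print(p_heads * 40 - (1 - p_heads) * 2 * 10)  # ~0.0
```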
You argue that questions like “could I have been a chimpanzee” seem ridiculous. But these are closely analogous to the types of questions that one needs to ask when making decisions according to FDT (e.g. “are the decisions of chimpanzees correlated with my own?”). So, if we need to grapple with these questions somehow in order to make decisions, grappling with them via our choice of a reference class doesn’t seem like the worst way to do so.
I think that “how much are my decisions correlated with those of the chimps?” is a much more meaningful and tractable question, with a much more determinate answer, than “are the chimps in my reference class?” Asking questions about correlations between things is the bread and butter of Bayesianism. Asking questions about anthropic reference classes isn’t—or, doesn’t need to be.
I’m reminded of Yudkowsky’s writing about why he isn’t prepared to get rid of the concept of “anticipated subjective experience”, despite the difficulties it poses from a quantum-mechanical perspective.
Thanks for the link. I haven’t read this piece, but fwiw, to me it feels like “there is a truth about the way that the world is/about what world I’m living in, and I’m trying to figure out what that truth is” is something we shouldn’t give up lightly. I haven’t engaged much with the QM stuff here, and I can imagine it moving me, but “how are you going to avoid fifthing?” doesn’t seem like a strong enough push on its own.