Thanks for writing this! I think you’re right that if you buy the Doomsday argument (or the assumptions that lead to it), then you should update against worlds with 10^50 future humans and towards worlds with Doom-soon.
However, you write

My take is that the Doomsday Argument is … but it follows from the assumptions outlined

which I don’t think is true. For example, your assumptions seem equally compatible with the self-indication assumption (SIA), which doesn’t predict Doom-soon.[1]
I think a lot of the confusion in anthropics goes away when we convert probability questions into decision problems. This is what Armstrong’s Anthropic Decision Theory does.
Interestingly, something like the Doomsday argument applies for average utilitarians: they bet on Doom-soon, since in the case where they win the bet, the utility is spread over far fewer people.
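To make that concrete with a toy calculation (numbers invented purely for illustration): suppose a bet yields total utility $+u$ if Doom-soon and $-u$ otherwise, and the two worlds contain $N_s = 10^{11}$ and $N_l = 10^{50}$ people respectively. For an average utilitarian with credence $p$ in Doom-soon,

$$
\mathbb{E}[\Delta \bar{U}] \;=\; p\,\frac{u}{N_s} \;-\; (1-p)\,\frac{u}{N_l} \;>\; 0
\quad\Longleftrightarrow\quad
p \;>\; \frac{N_s}{N_s + N_l} \approx 10^{-39},
$$

so almost any prior makes the bet look good, which is the sense in which the average utilitarian effectively bets on Doom.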
Katja Grace has written about SIA Doomsday, but this is (in my view) contingent on beliefs about aliens & simulations, whereas SSA Doomsday is not.
Thanks for the links! They were interesting and I’m happy that philosophers, including ones close to EA, are trying to grapple with these questions.
I was confused by SIA, and found that I agree with Bostrom’s critique of it much more than with the argument itself. The changes to the prior it proposes seem ad hoc, and I don’t understand how to motivate them. Let me know if you know how to motivate them (other than by a posteriori arguments that they, essentially by definition, cancel the update terms in the DA). It also seems to me to quickly lead to infinite expectations if taken at face value, unless there is a way to consistently avoid this issue by imposing some kind of upper bound on population?
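To spell out what I mean by the cancellation, here is the standard calculation in rough notation of my own ($N$ = total number of humans ever, $n$ = my birth rank, $P(N)$ = the non-anthropic prior):

$$
\begin{aligned}
\text{SSA:}\quad & P(n \mid N) = \tfrac{1}{N} \ (n \le N)
\;\;\Rightarrow\;\;
P_{\mathrm{SSA}}(N \mid n) \propto \frac{P(N)}{N} \quad \text{(the Doomsday shift toward small } N\text{)}\\
\text{SIA:}\quad & P_{\mathrm{SIA}}(N) \propto N\,P(N)
\;\;\Rightarrow\;\;
P_{\mathrm{SIA}}(N \mid n) \propto N\,P(N)\cdot\tfrac{1}{N} = P(N),
\end{aligned}
$$

so the $1/N$ Doomsday term is cancelled exactly, essentially by construction. And the reweighted prior $N\,P(N)$ is what worries me: with no upper bound on $N$, it already fails to normalize for priors as thin as $P(N) \propto 1/N^2$.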
Anthropic decision theory seems more interesting to me, though I haven’t yet tried to understand it properly. I’ll take a look at the paper you linked when I get a chance.
But I agree with your meta-point: I implicitly assumed SSA together with my “assumption 5”, and SSA might not follow from the other assumptions.