You wake up in a mysterious box, and hear the booming voice of God:
“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.
If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.
To get into heaven, you have to answer this correctly: Which way did the coin land?”
You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.
But then you get up, walk outside, and look at the number on your box.
‘3’. Huh. Now you don’t know what to believe.
If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
With low confidence, I think I agree with this framing.
If correct, then I think the point is that finding ourselves at an ‘early point in history’ updates us against a big future, while the fact that we exist at all updates us in favour of a big future, and the two updates cancel out.
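To make the cancellation concrete, here’s a minimal sketch of the arithmetic, assuming SIA-style weighting (each hypothesis gets weight in proportion to how many observers it creates). The observer counts and the box label are the ones from the story; nothing else is assumed.

```python
N_HEADS = 10              # observers created if the coin landed heads
N_TAILS = 10_000_000_000  # observers created if the coin landed tails
prior_heads, prior_tails = 0.5, 0.5

# Update 1: "I woke up at all." Under SIA-style weighting, each hypothesis
# is weighted by how many observers it creates.
w_heads = prior_heads * N_HEADS
w_tails = prior_tails * N_TAILS
p_tails_after_waking = w_tails / (w_heads + w_tails)   # ~0.999999999

# Update 2: "My box is labeled 3." Given that I exist in some world, every
# label in that world is equally likely, so the likelihoods are
# 1/10 under heads and 1/10^10 under tails.
w_heads *= 1 / N_HEADS
w_tails *= 1 / N_TAILS
p_tails_after_label = w_tails / (w_heads + w_tails)

print(p_tails_after_waking)  # strongly favours tails
print(p_tails_after_label)   # back to 0.5: the two updates cancel exactly
```

The billion-to-one shift from “I exist at all” is exactly offset by the billion-to-one shift from “my box has a small number on it”, which is the cancellation described above.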
In the “voice of God” example, we’re guaranteed to minimize error by applying this reasoning: if God puts this question to every human he creates, and they all answer this way, most of them will be right.
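As a quick check on that frequency claim, here’s a toy tally, assuming God plays the game many times and every created human bets tails; the counts are again the ones from the story.

```python
import random

N_HEADS, N_TAILS = 10, 10_000_000_000
TRIALS = 100_000

correct = total = 0
for _ in range(TRIALS):
    if random.random() < 0.5:
        # Heads: 10 humans are created, all bet tails, all are wrong.
        total += N_HEADS
    else:
        # Tails: 10 billion humans are created, all bet tails, all are right.
        total += N_TAILS
        correct += N_TAILS

print(correct / total)  # ~ N_TAILS / (N_TAILS + N_HEADS), i.e. almost everyone is right
```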
Now, I’m really unsure about the following, but imagine each new human predicts Doomsday through DA reasoning; I’m not sure that minimizes error in the same way. We often assume the human population will grow exponentially and then suddenly go extinct, but in that case it seems like most people would end up mistaken in their predictions. Maybe we’re using the wrong priors?
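For what it’s worth, here’s one toy way to poke at that worry. Everything in it is an assumption I’m making up for illustration: the population doubles each generation and then goes extinct all at once, and each person’s “DA prediction” is the standard 95% bound that the total number of humans ever born is at most 20 times their own birth rank.

```python
def fraction_whose_bound_holds(num_generations: int) -> float:
    """Fraction of all people ever born whose DA-style 95% bound turns out true."""
    # Generation sizes 1, 2, 4, ... doubling until sudden extinction.
    sizes = [2 ** g for g in range(num_generations)]
    total = sum(sizes)
    # The bound "total <= 20 * rank" holds exactly for birth ranks >= total / 20.
    first_ok_rank = -(-total // 20)  # ceil(total / 20)
    return (total - first_ok_rank + 1) / total

for gens in (5, 10, 20, 30):
    print(gens, fraction_whose_bound_holds(gens))
```

Of course, this only checks one way of cashing out “predicts Doomsday through DA reasoning”, and it says nothing about whether the priors themselves are right.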