I would like to understand how that is a valid objection, because I honestly don’t see it. To simplify a bit: if you think that 1 (‘humanity won’t reach a posthuman stage’) and 2 (‘posthuman civilizations are extremely unlikely to run vast numbers of simulations’) are false, it follows that humanity will probably both reach a posthuman stage and run a vast number of simulations. And if you really think this will probably happen, I see no reason to deny that it has already happened in the past. Why postulate that we will be the first simulators? There’s no empirical evidence for it, given that we are talking about extremely detailed, realistic simulations; and since it was already agreed that the simulations are vastly numerous, it seems very unlikely that we are located at the first level. In other words, if one believes that intelligent life is part of a process which normally culminates in a massive ancestor-simulation program, the mere fact that there is intelligent life is not enough to determine where in the process it is located.
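The counting intuition here can be made concrete with a toy model. This is my own illustration, not something from Bostrom’s paper: the parameters `N` (simulations per simulating civilization) and `L` (depth of nesting) are assumptions chosen purely for the sake of the example. If each simulating civilization runs many simulations, and simulated civilizations can in turn simulate, base-level observers become a vanishing fraction of all observers:

```python
# Toy model (illustrative assumptions, not from the paper): each civilization
# that reaches the posthuman stage runs N ancestor simulations, and this
# nesting repeats for L levels below the base level.
N = 1000  # assumed number of simulations per simulating civilization
L = 3     # assumed depth of nesting

# Number of civilizations at each level: 1 at the base, N at level 1, N^2 at
# level 2, and so on.
civilizations_at_level = [N**k for k in range(L + 1)]
total = sum(civilizations_at_level)

# By an indifference-style count over civilizations, the chance of being at
# the base (unsimulated) level is the base level's share of the total.
p_base = civilizations_at_level[0] / total
print(f"P(base level) = {p_base:.2e}")
```

On these assumptions the base level holds one civilization out of more than a billion, so the probability of being ‘first’ is on the order of 10⁻⁹. The point is only directional: whatever the exact parameters, once one grants that simulations are vastly numerous, being at the first level is the overwhelmingly improbable position.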
To be clear, I’m not saying the conclusion is wrong—just that the explicit assumptions the paper makes (mainly the Indifference Principle) aren’t sufficient to imply its conclusion.
The version that you’ve just presented isn’t identical to the one in Bostrom’s paper—it’s (at least implicitly) making use of assumptions beyond the Indifference Principle. And I think it’s surprisingly non-trivial to work out exactly how to formalize the needed assumptions, and make the argument totally tight, although I’d still guess that this is ultimately possible.[1]
Caveat: The conclusion is at least slightly wrong. If we’re willing to assign non-zero probability to the hypothesis that we’re hallucinating the world because we’re ancestor simulations, it seems we should also assign non-zero probability to the hypothesis that we’re hallucinating for some other reason. (The argument implicitly assumes that being an ancestor simulation is the only ‘skeptical hypothesis’ we should assign non-zero probability to.) I think it’s also unclear how big a deal this caveat is.
My version tried to be an intuitive simplification of the core of Bostrom’s paper. I actually can’t identify the assumptions you mention. If you are right, I may have presupposed them while reading the paper, or my memory may be betraying me for the sake of making sense of it. Anyway, I really appreciate that you took the time to comment.