I’m trying to understand the simulation argument. I think Bostrom uses the Indifference Principle (IP) in a strange way. If we become a posthuman civilization that runs many, many simulations of our ancestors (meaning us), then how does the IP apply? It only applies when one has no other information to go on. But in this case, we do have some extra information, and crucial information at that: we know that we are not in any of the simulations that we have produced. Therefore, we do not have any statistical reason to believe that we are simulated.
I agree that that’s a valid objection to the argument, as it’s presented in the paper, and that the follow-up FAQ essay also doesn’t sufficiently address it. Basically, the Indifference Principle defined in the paper isn’t sufficient to support the paper’s conclusions (for the reason you give).
I think the main question is whether this issue can be patched in a simple way (e.g. by slightly tweaking the Indifference Principle) or whether the objection is actually much deeper than that. I’m not sure, personally.
I also really recommend Joe’s essay as an exploration of these issues. (The essay also links a related Google doc I wrote on the subject, although that doc goes a bit less deep.)
I would like to understand how that is a valid objection, because I honestly don’t see it. To simplify a bit: if you think that 1 (‘humanity won’t reach a posthuman stage’) and 2 (‘posthuman civilizations are extremely unlikely to run vast numbers of simulations’) are both false, it follows that humanity will probably both reach a posthuman stage and run a vast number of simulations. Now, if you really think this will probably happen, I can see no reason to deny that it has already happened in the past. Why postulate that we will be the first simulators? There’s no empirical evidence to support that: we are talking about extremely detailed, realistic simulations, and since it was already agreed that there are vast numbers of them, it seems very, very unlikely that we are located at the first level. In other words, if one believes that intelligent life is part of a process which normally culminates in a massive ancestor-simulation program, the fact that there is intelligent life is not enough to determine where in the process it is located.
To be clear, I’m not saying the conclusion is wrong—just that the explicit assumptions the paper makes (mainly the Indifference Principle) aren’t sufficient to imply its conclusion.
The version that you’ve just presented isn’t identical to the one in Bostrom’s paper—it’s (at least implicitly) making use of assumptions beyond the Indifference Principle. And I think it’s surprisingly non-trivial to work out exactly how to formalize the needed assumptions, and make the argument totally tight, although I’d still guess that this is ultimately possible.[1]
Caveat: Although the conclusion is at least slightly wrong, since, if we’re willing to assign non-zero probability to the hypothesis that we’re hallucinating the world because we’re ancestor simulations, it seems we should also assign non-zero probability to the hypothesis that we’re hallucinating it for some other reason. (The argument implicitly assumes that being an ancestor simulation is the only ‘skeptical hypothesis’ we should assign non-zero probability to.) I think it’s also unclear how big a deal this caveat is.
My version tried to be an intuitive simplification of the core of Bostrom’s paper. I actually can’t identify the extra assumptions you mention. If you are right, I may have presupposed them while reading the paper, or my memory may be reshaping the argument in order to make sense of it. Anyway, I really appreciate that you took the time to comment.
Assume that base reality is similar to our own world, and that each civ has many descendant “simulated” civs. Although each civ knows it is not one of its own sims, the same is true of all of them, so it is still plausible that we should be indifferent between them—most of which are simulated.
Plenty of room to object at basically every stage of the argument; my point is just that you might still want to be indifferent between civs that all know they aren’t their own sim.
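To make the layered-civ picture above concrete, here’s a toy calculation (my own sketch, not anything from Bostrom’s paper): assume a single base-reality civ, and that every civ runs k ancestor simulations, which in turn run their own, down to some depth d. Even though every civ knows it isn’t among its own sims, the overwhelming majority of civs in the whole tree are simulated:

```python
# Toy model of the layered-civilisation picture (illustrative only).
# Level 0 is the single base-reality civ; each civ at level n spawns
# k simulated civs at level n+1, down to depth d.

def fraction_simulated(k: int, d: int) -> float:
    """Fraction of all civs in the tree that are simulated."""
    counts = [k ** level for level in range(d + 1)]  # civs per level: 1, k, k^2, ...
    total = sum(counts)
    return (total - counts[0]) / total  # everything except the one base civ

# Even with modest numbers, almost every civ is simulated:
print(fraction_simulated(k=1000, d=2))  # ~0.999999
```

Under indifference over all civs in the tree (all of whom share the knowledge that they aren’t their own sims), this fraction is the credence each civ would assign to being simulated.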
Thank you. I am just wondering, though: when you say “each civ,” what do you mean? What are these civilizations? Why assume they exist? What motivates the idea that there are other civilizations that run simulations sufficiently similar to our own world (as strange and contingent as its laws and constants are)?
The idea is that it seems like we are in a position to make ancestor simulations, which could contain organised life, i.e. “civilisations” (civs for short). Moreover, both simulated reality and base reality might be similar in that respect.
Sorry to say this, but do you actually not follow, or are you doing the analytic philosopher move of saying “what do you mean” because you will only accept a rigorous/watertight argument (or just don’t like the argument)?
Ok, thank you very much. But why then do so many people take the argument seriously? Is it surprising that the peer-review process didn’t pick up this problem?
I think most people would probably regard the objection as a nitpick (e.g. “OK, maybe the Indifference Principle isn’t actually sufficient to support a tight formal argument, and you need to add in some other assumption, but the informal version of the argument is just pretty clearly right”), feel the objection has been successfully answered (e.g. find the response in the Simulation Argument FAQ more compelling than I do), or simply haven’t noticed the potential issue.
I think it’s still totally reasonable for the paper to have passed peer review. (I would have recommended publication if I were a reviewer.) It’s still a groundbreaking paper that raises new considerations and brings attention to a really important hypothesis. It’s also rare for a published philosophical argument to actually be totally tight and free from issues, and the issue with the paper is ambiguous enough and hard-to-think-about enough that there’s still no consensus about whether it actually is a real or important issue.