Yes, the resolution of other moral patients is something I left out. I appreciate you pointing this out because I think it is important. I was perhaps assuming that longtermists are simulated accurately while everything else has much lower resolution, such as only being philosophical zombies, though as I articulate this I'm not sure that would work. We would have to know more about the physics of the simulation, though we could probably make some good guesses.
And yes, it becomes much stronger if I am the only being in the universe, simulated or otherwise. There are some other reasons I sometimes think the case for solipsism is very strong, but I never bother to argue for them, because if I'm right then there's no one else to hear what I'm saying anyway! Plus, the problem with solipsism is that to some degree everyone must evaluate it for themselves, since the case for it may vary quite a bit between individuals depending on who in the universe you find yourself to be.
Perhaps you are right about AI creating simulations. I’m not sure they would be as likely to create as many, but they may still create a lot. This is something I would have to think about more.
I think the argument with aliens is that perhaps there is a very strong filter such that any set of beings who evaluate the decision will conclude that they are in a simulation, and so anything intelligent enough to become spacefaring would also be intelligent enough to realize it is probably in a simulation and decide expansion isn't worth it. Perhaps this could even apply to AI.
It is, I admit, quite an extreme statement that no set of beings would ever conclude that they might not be in a simulation, or would never pursue longtermism on the off-chance that they are not. But on the other hand, it would be equally extreme not to allow the possibility that we are in a simulation to affect our decision calculus at all, since it does seem quite possible. Perhaps, though, the expected value of the simulation is too small to have much of an effect, except in the case where the universe is tiled with meaning-maximizing hedonium of the most important time in history and we are it.
I really appreciate your comment on CDT and EDT as well. I felt like they might give the same answer, even though it also "feels" somewhat similar to Newcomb's Paradox. I think I will have to study decision theory quite a bit more to really get a handle on this.
Thank you for this reply!