This falls under anthropics, in case you’re interested in related writing. It doesn’t seem very close to Newcomb’s problem to me, but I’m not that well acquainted with these areas.
One question I’d have is: Who counts as an observer for these observer selection effects? Do they need to be sufficiently intelligent? If so, an alternative to us being in a simulation that’s about to end or never making it off Earth is that we’re in base reality and the future and/or simulations could be filled with unintelligent conscious beings (and possibly unconscious but sufficiently intelligent AI, if observers also need to be conscious), but not astronomically many intelligent conscious beings. Unintelligent conscious beings are still possible and matter, at least to me (others might think consciousness requires a pretty high level of cognition or self-awareness), so this argument seems like a reason to prioritize such scenarios further relative to those with many intelligent conscious beings. We might think almost all expected moral patients (by number and moral weight) are not sufficiently intelligent to count as observers.
Thank you! Yes, I’m pretty new here, and now that you mention it I think you’re right: anthropics is the better fit.
I’m inclined to think the main thing required to be an observer is enough intelligence to ask whether one is likely to be the entity one is by pure chance. This doesn’t necessarily require consciousness, just the ability to incorporate the likelihood that one is in a simulation into one’s decision calculus.
I hadn’t thought about the possibility that future beings are mostly conscious but very few are intelligent enough to ask the question. This is definitely a possibility. Though if the vast majority of future beings are unintelligent, you’d expect there to be far fewer simulations of intelligent beings like ourselves, which somewhat cancels this possibility out.
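To make that cancellation concrete, here is a toy self-sampling calculation (all numbers are made up for illustration; the key assumption, which the argument itself relies on, is that the number of simulations run scales in proportion to the number of intelligent beings in base reality):

```python
def p_simulated(base_observers, sims_per_observer, sim_observers_per_sim):
    """P(a randomly chosen intelligent observer is simulated),
    assuming simulations are run in proportion to how many
    intelligent observers base reality contains."""
    simulated = base_observers * sims_per_observer * sim_observers_per_sim
    return simulated / (simulated + base_observers)

# Scenario A: a future full of intelligent beings, hence many simulators.
many = p_simulated(base_observers=1e12, sims_per_observer=1.0,
                   sim_observers_per_sim=100)

# Scenario B: a mostly unintelligent future, hence far fewer simulators.
few = p_simulated(base_observers=1e6, sims_per_observer=1.0,
                  sim_observers_per_sim=100)

# Under the proportionality assumption the base-observer count divides
# out entirely: P(simulated) = s*k / (s*k + 1), so both scenarios give
# the same probability and the effect cancels exactly.
print(many, few)
```

Under this (admittedly strong) proportionality assumption the cancellation is exact; if fewer intelligent beings also run fewer simulations *per capita*, the cancellation would be more than complete.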
So yeah, since I think most future beings (or at least a very large number of them) will most likely be intelligent, I think the selection effects do likely apply.