This seems important to me because, for someone claiming that we should think that we’re at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does. To me at least, that’s a striking fact and wouldn’t have been obvious before I started thinking about these things.
It seems to me the object level is where the action is, and the non-simulation Doomsday Arguments mostly raise a phantom consideration that cancels out (in particular, cancelling out re whether there is an influenceable lock-in event this century).
You could say a similar thing about our being humans rather than bacteria, which, according to paleontologists, have cumulatively outnumbered us on Earth by more than 1,000,000,000,000,000,000,000,000 to one.
Or you could go further and ask why we aren't neutrinos: there are more than 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 of them in the observable universe.
However extravagant the class you pick, it's cancelled out by the knowledge that we find ourselves in our current situation. I think it's more confusing than helpful to say that our being humans rather than neutrinos is doing more than 10^70 times as much work as object-level analysis of AI in the case for attending to x-risk/lock-in with AI. You didn't need to think about that in the first place to understand AI or bioweapons; it was an irrelevant distraction.
The same is true for future populations that know they're living in intergalactic societies and the like. If we compare possible world A, where future Dyson spheres can support a population of P (who know they're in that era), with possible world B, where future Dyson spheres can support a population of 2P, the two worlds don't give us very different expectations of the number of people finding themselves in our circumstances, and so the consideration cancels out.
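The cancellation in the A-versus-B comparison can be made concrete with a toy Bayesian calculation. All numbers below are invented for illustration; the point is only that the likelihoods are equal, so the posterior matches the prior:

```python
# Two candidate worlds differ only in far-future capacity: world B's
# Dyson spheres support twice world A's population. Both contain the
# same number of people in our apparent early circumstances.
EARLY_POP = 1e10  # hypothetical count of early observers, same in A and B

worlds = {
    "A": {"prior": 0.5, "future_pop": 1e20},
    "B": {"prior": 0.5, "future_pop": 2e20},
}

# The evidence "I find myself in early circumstances" has likelihood
# proportional to the number of such observers a world contains. That
# number is EARLY_POP in both worlds, so future_pop never enters.
unnormalized = {name: w["prior"] * EARLY_POP for name, w in worlds.items()}
total = sum(unnormalized.values())
posterior = {name: u / total for name, u in unnormalized.items()}

print(posterior)  # priors unchanged: {'A': 0.5, 'B': 0.5}
```

A naive Doomsday-style update (likelihood proportional to 1/total population) would instead favor the smaller world, which is exactly the phantom consideration that finding ourselves in our current situation cancels.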
The simulation argument (or a brain-in-a-vat story or the like) is different and doesn't automatically cancel out, because it's a way to make our observations more likely and common. However, for policy it does still largely cancel out, as long as the total influence of people genuinely in our apparent circumstances is a lot greater than that of all simulations with apparent circumstances like ours: a bigger future world means more influence for genuine inhabitants of important early times and also more simulations. [But our valuation winds up being bounded by our belief about the portion of all-time resources allocated to sims in apparent positions like ours.]
Another way of thinking about this is that, prior to getting confused by any anthropic updating, if you were going to set a policy for humans who find themselves in our apparent situation, across possibilities assessed non-anthropically at the object level (humanity doomed, Time of Perils, early lock-in, no lock-in), you would just want to add up the consequences of the policy across genuine early humans and sims in each (non-anthropically assessed) possible world.
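That accounting can be sketched as a short calculation. The scenario probabilities, population counts, and per-actor values below are placeholder assumptions, not estimates from the text; the structure, a sum over non-anthropically assessed worlds of every actor who would apply the policy, is the point:

```python
# Possible worlds assessed at the object level, with hypothetical:
#   p          -- probability of the world (non-anthropic assessment)
#   genuine    -- genuine early humans in our apparent situation
#   sims       -- simulated people in our apparent situation
#   v_genuine  -- value per genuine actor of adopting the policy
#   v_sim      -- value per simulated actor (much lower influence)
scenarios = {
    "doomed":         {"p": 0.2, "genuine": 1e10, "sims": 0.0,  "v_genuine": 0.0, "v_sim": 0.0},
    "time_of_perils": {"p": 0.3, "genuine": 1e10, "sims": 1e12, "v_genuine": 1.0, "v_sim": 1e-3},
    "early_lock_in":  {"p": 0.2, "genuine": 1e10, "sims": 1e12, "v_genuine": 1.0, "v_sim": 1e-3},
    "no_lock_in":     {"p": 0.3, "genuine": 1e10, "sims": 1e12, "v_genuine": 0.1, "v_sim": 1e-3},
}

def policy_value(scenarios):
    """Expected value of everyone in our apparent situation adopting the
    policy: sum over worlds of probability times total per-actor impact,
    counting genuine early humans and sims alike."""
    return sum(
        s["p"] * (s["genuine"] * s["v_genuine"] + s["sims"] * s["v_sim"])
        for s in scenarios.values()
    )

print(policy_value(scenarios))
```

Note that no anthropic weighting appears anywhere: each world's probability comes from the object-level assessment, and the observer counts enter only as the number of actors who would execute the policy in that world.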
A vast future gives more chances for influence on lock-in later, which might turn out to be even bigger than this century (although this gets rapidly less likely with time and expansion), but it shouldn't change our assessment of lock-in this century, and a substantial chance of that gives us a good chance of HoH (or simulation-adjusted HoH).