“If we’re doing things right, it shouldn’t matter whether we’re building earliness into our prior or updating on the basis of earliness.”
Thanks, Lukas, I thought this was very clear and exactly right.
“So now we’ve switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn’t seem much easier than making a guess about P(X in H | X in E), and it’s not obvious whether our intuitions here would lead us to expect more or less influentialness.”
That’s interesting, thank you; this statement of the debate has helped clarify things for me. It does seem to me that doing the update (going via P(X in E | X in H) rather than directly trying to assess P(X in H | X in E)) is helpful, but I’d understand the position of someone who wanted to assess P(X in H | X in E) directly.
I think it’s helpful to assess P(X in E | X in H) because it’s not totally obvious how one should update on the basis of earliness. The arrow of causality and the possibility of lock-in over time definitely give reasons in favor of influential people being earlier. But there’s still the big question of how great that update should be. And the cumulative nature of knowledge and understanding gives reasons in favor of thinking that later people are more likely to be more influential.
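The “dividing by 10” arithmetic from the quoted passage can be made concrete with a minimal Bayes sketch. The reference-class size N below is an invented placeholder, and the uniform (“self-sampling”) prior is an assumption for illustration, not a claim about the right prior; 1e10 and 1e11 come from the quote.

```python
# Minimal sketch of the earliness update, assuming a uniform prior
# over a reference class of N people. N is an invented placeholder.

N = 1e13          # assumed size of the whole reference class (illustrative)
N_H = 1e10        # the 1e10 most influential people (from the quote)
N_E = 1e11        # the 1e11 earliest people (from the quote)

p_H = N_H / N     # prior P(X in H) under uniform sampling
p_E = N_E / N     # P(X in E)

def p_H_given_E(p_E_given_H):
    """Bayes: P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E)."""
    return p_E_given_H * p_H / p_E

# If being influential made earliness certain, P(X in E | X in H) = 1,
# and the posterior is that guess divided by N_E / N_H = 10:
assert abs(p_H_given_E(1.0) - N_H / N_E) < 1e-12

# If influential people were no likelier to be early than chance,
# P(X in E | X in H) = P(X in E), and the posterior equals the prior:
assert abs(p_H_given_E(p_E) - p_H) < 1e-12
```

The asymmetry in the paragraph above shows up as uncertainty about which value of `p_E_given_H` to feed in: lock-in pushes it toward 1, cumulative knowledge pushes it down toward chance.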
This seems important to me because, for someone claiming that we should think that we’re at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does. To me at least, that’s a striking fact and wouldn’t have been obvious before I started thinking about these things.
It seems to me the object level is where the action is, and the non-simulation Doomsday Arguments mostly raise a phantom consideration that cancels out (in particular, cancelling out re whether there is an influenceable lock-in event this century).
You could say a similar thing about our being humans rather than bacteria, which cumulatively outnumber us by more than 1,000,000,000,000,000,000,000,000 times on Earth thus far according to the paleontologists.
Or you could go further and ask why we aren’t neutrinos: there are more than 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 of them in the observable universe.
However extravagant the class you pick, it’s cancelled out by the knowledge that we find ourselves in our current situation. I think it’s more confusing than helpful to say that our being humans rather than neutrinos is doing more than 10^70 times as much work as object-level analysis of AI in the case for attending to x-risk/lock-in with AI. You didn’t need to think about that in the first place to understand AI or bioweapons; it was an irrelevant distraction.
The same is true for future populations that know they’re living in intergalactic societies and the like. If we compare possible world A, where future Dyson spheres can support a population of P (who know they’re in that era), and possible world B, where future Dyson spheres can support a population of 2P, the two worlds don’t give us very different expectations of the number of people finding themselves in our circumstances, and so the difference cancels out.
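The cancellation can be sketched numerically: if the likelihood of finding yourself in our circumstances is taken to be proportional to the number of people in those circumstances, and that number is the same in worlds A and B, the update does nothing. All priors and population counts below are invented placeholders.

```python
# Toy sketch of the world-A-vs-world-B cancellation. Likelihood of our
# observations is proportional to the count of people in our apparent
# circumstances; the worlds differ only in their far-future populations.

def posterior(prior_a, prior_b, early_a, early_b):
    """Posterior over the two worlds after updating on being early."""
    z = prior_a * early_a + prior_b * early_b
    return prior_a * early_a / z, prior_b * early_b / z

# World A: Dyson spheres support P people; world B: 2P. The early
# populations are identical, so the update leaves the prior unchanged:
assert posterior(0.5, 0.5, early_a=1e11, early_b=1e11) == (0.5, 0.5)

# By contrast, a hypothesis that changed the number of people in our
# apparent circumstances (e.g. the simulation argument) would not cancel:
a, b = posterior(0.5, 0.5, early_a=1e11, early_b=2e11)
assert b > a
```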
The simulation argument (or a brain-in-a-vat story or the like) is different and doesn’t automatically cancel out, because it’s a way to make our observations more likely and common. However, for policy it does still largely cancel out, as long as the total influence of people genuinely in our apparent circumstances is a lot greater than that of all simulations with apparent circumstances like ours: a bigger future world means more influence for genuine inhabitants of important early times and also more simulations. [But our valuation winds up being bounded by our belief about the portion of all-time resources allocated to sims in apparent positions like ours.]
Another way of thinking about this is that prior to getting confused by any anthropic updating, if you were going to set a policy for humans who find ourselves in our apparent situation across nonanthropic possibilities assessed at the object level (humanity doomed, Time of Perils, early lock-in, no lock-in), you would just want to add up the consequences of the policy across genuine early humans and sims in each (non-anthropically assessed) possible world.
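That bookkeeping can be sketched as a small expected-value calculation. The world names come from the paragraph above; every probability, population count, and per-actor value below is an invented placeholder, and the per-actor values for genuine humans and sims are kept separate since their influence can differ.

```python
# Sketch: score a policy for everyone in our apparent situation (genuine
# early humans plus sims like us) across object-level possibilities,
# weighted by non-anthropic probabilities. All numbers are placeholders.

# (name, probability, genuine early humans, sims like us,
#  value per genuine human, value per sim)
worlds = [
    ("humanity doomed", 0.2, 1e10, 0.0,  0.0, 0.0),
    ("Time of Perils",  0.3, 1e10, 1e12, 1.0, 0.01),
    ("early lock-in",   0.3, 1e10, 1e12, 2.0, 0.02),
    ("no lock-in",      0.2, 1e10, 1e12, 0.1, 0.001),
]

def policy_score(worlds):
    """Sum the policy's consequences over everyone in our apparent
    situation, across (non-anthropically assessed) possible worlds."""
    return sum(p * (genuine * v_real + sims * v_sim)
               for _, p, genuine, sims, v_real, v_sim in worlds)

score = policy_score(worlds)
assert score > 0
```

The point of the sketch is only structural: no anthropic reweighting appears anywhere; the object-level probabilities and counts do all the work.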
A vast future gives more chances for influence on lock-in later, which might turn out to be even bigger than this century (although this gets rapidly less likely with time and expansion), but it shouldn’t change our assessment of lock-in this century, and a substantial chance of that gives us a good chance of HoH (or simulation-adjusted HoH).