Hey, thanks for sharing these. They seem like a good starting point. But I don’t know whether to take them literally.
On a quick read, things I may not buy:
So that’s about 1e10 FLOP per second per square meter available. So, you could divide the world into 10x10 meter squares and then have a 1e12 FLOP/s computer assigned to each square to handle the physics and graphics.
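As a quick sanity check on the quoted numbers (only the 1e10 FLOP/s-per-square-meter figure is from the quote; the rest is just unit bookkeeping):

```python
# Sanity check of the quoted per-tile compute budget.
FLOPS_PER_M2 = 1e10            # quoted estimate: FLOP per second per square meter
TILE_SIDE_M = 10               # side of one tile, in meters
TILE_AREA_M2 = TILE_SIDE_M**2  # 100 m^2 per tile

flops_per_tile = FLOPS_PER_M2 * TILE_AREA_M2
print(f"{flops_per_tile:.0e} FLOP/s per 10x10 m tile")  # 1e+12 FLOP/s, as quoted
```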
Not sure I buy this decomposition. For instance, accounting for things moving from one 10x10m region to another, i.e., simulating the boundaries, seems like it would be annoying. But you could have the world be a series of 10x10m rooms?
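The standard trick for this is spatial domain decomposition: physics at the edges reads a ghost copy of the neighbor’s border strip, and entities that cross a boundary get handed off to the neighboring tile. A minimal handoff sketch, where every name and detail is my own illustration rather than anything from the estimate:

```python
from dataclasses import dataclass, field

SIDE = 10.0  # tile side length in meters

@dataclass
class Tile:
    x: int  # tile coordinates in the world grid
    y: int
    entities: list = field(default_factory=list)  # (local_x, local_y) positions

def step(world: dict, tile: Tile, moves: dict) -> None:
    """Apply one movement step; hand entities that leave this tile to a neighbor.

    Assumes `world` maps (x, y) -> Tile and already contains every neighbor.
    """
    kept = []
    for pos in tile.entities:
        dx, dy = moves.get(pos, (0.0, 0.0))
        nx, ny = pos[0] + dx, pos[1] + dy
        if 0 <= nx < SIDE and 0 <= ny < SIDE:
            kept.append((nx, ny))  # still inside this tile
        else:
            # Crossed a boundary: hand off to the neighboring tile,
            # converting to that tile's local coordinates.
            neighbor = world[(tile.x + int(nx // SIDE), tile.y + int(ny // SIDE))]
            neighbor.entities.append((nx % SIDE, ny % SIDE))
    tile.entities = kept
```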
dynamically allocate compute so that you have more of it where your creatures are and don’t waste as much simulating empty areas
I buy this, but I’m worried about the world staying consistent. There’s also a memory tradeoff here.
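Concretely, the allocation could look something like this: split a fixed FLOP/s budget across tiles in proportion to creature count, with a uniform floor so quiet tiles still get a coarse but consistent update. The names, numbers, and floor idea here are all just my illustration:

```python
TOTAL_FLOPS = 1e16   # illustrative overall budget for some region (FLOP/s)
MIN_SHARE = 0.05     # fraction reserved evenly for empty/quiet tiles

def allocate(creature_counts: dict) -> dict:
    """Map tile -> FLOP/s: a uniform floor plus a share proportional to activity."""
    floor = TOTAL_FLOPS * MIN_SHARE / len(creature_counts)
    active_budget = TOTAL_FLOPS * (1 - MIN_SHARE)
    total = sum(creature_counts.values()) or 1  # avoid division by zero
    return {t: floor + active_budget * c / total for t, c in creature_counts.items()}

# Example: one busy tile, two empty ones.
print(allocate({"A": 98, "B": 0, "C": 0}))
```

The memory tradeoff shows up in the coarsely simulated tiles: they still have to persist enough state to look consistent when a creature wanders back, so the FLOPs saved are partly paid for in storage.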
Mmh, maybe I’m not so worried about FLOPs per se but about parallelizability/wall-clock time.
Well, totally, this thing would take a fuckton of wall-clock time etc., but that’s not a problem; this is just a thought experiment: “If we did this bigass computation, would it work?” If the answer is “Yep, 90% likely to work”, then that means our distribution over OOMs should have 90% by +18.
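Spelled out, that inference is just a constraint on the CDF of “how many extra OOMs of compute we’d need”: if the +18 OOM computation works with probability 0.9, the CDF has to reach 0.9 at +18. With a made-up placeholder distribution:

```python
# P(exactly this many extra OOMs of compute are needed) -- placeholder numbers.
ooms = {6: 0.10, 10: 0.25, 14: 0.30, 18: 0.25, 30: 0.10}

def cdf(x: float) -> float:
    return sum(p for oom, p in ooms.items() if oom <= x)

print(f"{cdf(18):.2f}")  # 0.90 -- consistent with "90% likely to work at +18"
```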
Mmh, but then OOMs of compute stop being predictive of timelines under this anchor, because we can’t just think about how much compute we have; we also have to think about whether we can actually use it for this.
Sorta? Like, yeah, suppose you have 10% of your probability mass on the evolution anchor. Well, that means that like maaaaybe in 2090 or so we’ll have enough compute to recapitulate evolution, and so maaaaybe you could say you have 10% credence that we’ll actually build AGI in 2090 using the recapitulate-evolution method. But that assumes basically no algorithmic progress on other paths to AGI. But anyhow, if you were doing that, then yes, it would be a good counterargument that actually, even if we had all the compute in 2090, we wouldn’t have the wall-clock time, because latency etc. would make it take dozens of years at least to perform this computation. So then (that component of) your timelines would shift out even farther.
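To put a toy number on the latency point (every figure below is made up for illustration; the mechanism, serial depth times per-step latency, is the whole point):

```python
GENERATIONS = 1e9       # assumed sequential generations over evolutionary history
SERIAL_STEPS = 1e6      # assumed per-generation simulation steps that can't be parallelized
STEP_LATENCY_S = 1e-6   # assumed wall-clock floor per serial step (e.g. network latency)

seconds = GENERATIONS * SERIAL_STEPS * STEP_LATENCY_S
print(f"~{seconds / (3600 * 24 * 365.25):.0f} years")  # ~32 years, regardless of FLOP/s
```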
I think this matters approximately zero, because it is a negligible component of people’s timelines, and it’s far away anyway, so making it move even farther away isn’t decision-relevant.
Well, I agree that this is pretty in the weeds, but personally this has made me view the evolutionary anchor as less forceful.
Like, the argument isn’t “ha, we’re not going to be able to simulate evolution, checkmate AGI doomers”; it’s “the evolutionary anchor was a particularly forceful argument for giving a substantial probability to x-risk this century, even to people who might otherwise be very skeptical. The fact that it doesn’t go through leads to a variety of small updates, e.g., it marginally increases the value of non-x-risk longtermism.”
Huh, I guess I didn’t realize how much weight some people put on the evolution anchor. I thought everyone was (like me) treating it as a loose upper bound basically, not something to actually clump lots of probability mass on.
In other words: The people I know who were using the evolutionary anchor (people like myself, Ajeya, etc.) weren’t using it in a way that would be significantly undermined by having to push the anchor up 6 OOMs or so. Like I said, it would be a minor change to the bottom line according to the spreadsheet. Insofar as people were arguing for AGI this century in a way that can be undermined by adding 6 OOMs to the evolutionary anchor, those people are silly & should stop, for multiple reasons, one of which is that maaaybe environmental simulation costs mean that the evolution anchor really is 6 OOMs bigger than Ajeya estimates.
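As a toy version of the spreadsheet arithmetic: if the evolution anchor carries only ~10% of the mixture weight, pushing it out 6 OOMs can move the bottom line by at most that 10%. All the weights and probabilities below are placeholders, not Ajeya’s actual values:

```python
# anchor -> (mixture weight, P(enough compute by 2100) under that anchor)
anchors = {
    "short-horizon NN": (0.5, 0.9),
    "long-horizon NN":  (0.4, 0.6),
    "evolution":        (0.1, 0.8),
}

def p_agi_by_2100(a: dict) -> float:
    return sum(w * p for w, p in a.values())

before = p_agi_by_2100(anchors)
anchors["evolution"] = (0.1, 0.1)  # +6 OOMs: this component's probability collapses
after = p_agi_by_2100(anchors)
print(f"before: {before:.2f}, after: {after:.2f}")  # before: 0.77, after: 0.70
```

The bottom line moves by a few percentage points, which is the sense in which the change is minor.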