This seems like a reasonable assumption for other anchors such as the Lifetime and the Neural Network Horizon anchors, which assume that training environments for TAI are similar to training environments used for AI today. But it seems much more difficult to justify for the evolution anchor, which Ajeya admits would be far more computationally intensive than storing text or simulating a deterministic Atari game.
This post argues that the evolutionary environment is at least as complex as the brains of the organisms within it, while the second paragraph of the quotation above disagrees. Neither argument seems detailed enough to settle the question definitively, so I’d be interested to read any further research on the two questions proposed in the post:
1. Estimating the least fine-grained world that we would expect to be able to produce intelligent life if we simulated natural selection in it.
2. Calculating how much compute it would in fact take to simulate it.
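On the second question, a purely illustrative back-of-envelope sketch is possible: decompose the environment's cost into a cell count, a per-cell update cost, and a number of updates. Every number below is a hypothetical placeholder, not an estimate from the post or from Ajeya's report:

```python
# Toy back-of-envelope for the compute cost of simulating a coarse
# evolutionary environment. Every number here is a hypothetical
# placeholder, chosen only to show the structure of the calculation.

def env_sim_flops(n_cells: float, flops_per_cell_step: float,
                  steps_per_sim_second: float, sim_seconds: float) -> float:
    """Total FLOPs = cells * cost per cell update * updates per second * seconds."""
    return n_cells * flops_per_cell_step * steps_per_sim_second * sim_seconds

# Hypothetical coarse world: 1e12 grid cells, 100 FLOPs per cell update,
# 10 updates per simulated second, 1e16 simulated seconds (~300 million years).
total = env_sim_flops(1e12, 1e2, 1e1, 1e16)
print(f"{total:.1e}")  # 1.0e+31 FLOPs under these placeholder assumptions
```

The point of the sketch is only that the answer is extremely sensitive to the granularity assumptions (cell count and update rate), which is exactly what the two questions above would need to pin down.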
> But it seems much more difficult to justify for the evolution anchor, which Ajeya admits would be far more computationally intensive than storing text or simulating a deterministic Atari game.
The evolution anchor involves more compute than the other anchors (because you need to get so many more data points and train the AI on them), but it’s not obvious to me that it requires a larger proportion of compute spent on the environment than the other anchors. Like, it seems plausible to me that the evolution anchor looks more like having the AI play pretty simple games for an enormously long time, rather than having a complicated physically simulated environment.
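To make the "proportion of compute spent on the environment" point concrete, here is a toy calculation under the two pictures; both FLOP figures are invented for illustration and are not from the report:

```python
# Toy comparison of the fraction of per-timestep compute spent on the
# environment versus the agent. All FLOP counts are invented placeholders.

def env_fraction(env_flops_per_step: float, agent_flops_per_step: float) -> float:
    """Fraction of total per-step compute spent simulating the environment."""
    return env_flops_per_step / (env_flops_per_step + agent_flops_per_step)

agent = 1e12  # hypothetical agent forward pass, FLOPs per step

# Simple Atari-like game: the environment is trivially cheap next to the agent.
print(f"{env_fraction(1e6, agent):.2e}")   # 1.00e-06

# Fine-grained physically simulated world: the environment dominates.
print(f"{env_fraction(1e15, agent):.2e}")  # 9.99e-01
```

Under the first set of placeholder numbers, running the simple game for enormously many steps raises total compute without the environment ever becoming the dominant cost, which is the shape of the claim above.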
Fair enough. Both seem plausible to me; we’d probably need more evidence to know which would require more compute.