Chaos theory is about systems where tiny deviations in initial conditions cause large deviations in what happens in the future. My impression (though I don’t know much about the field) is that, assuming some model of a system (e.g. the weather), you can prove things about how far ahead you can predict the system given some uncertainty (normally about the initial conditions, though uncertainty brought about by limited compute that forces approximations should work similarly). Whether the weather corresponds to any particular model isn’t really susceptible to proofs, but that question can be tackled by normal science.
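To make the "how far ahead can you predict, given some uncertainty" claim concrete, here is a minimal sketch using the logistic map, a standard toy chaotic system; the specific initial error and tolerance are arbitrary illustrative choices, not anything taken from the discussion itself.

```python
# Minimal sketch: sensitive dependence on initial conditions in the
# logistic map x -> 4x(1-x), a standard toy chaotic system.
# The initial error (1e-12) and tolerance (1e-2) are arbitrary choices.
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12          # two initial conditions differing by 1e-12
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 1e-2:        # the two "forecasts" now visibly disagree
        print(f"trajectories diverge past tolerance after {step} steps")
        break

# For this map the average error-doubling rate (Lyapunov exponent) is ln 2,
# so the expected prediction horizon is about log2(tolerance / initial error):
print("predicted horizon ≈", math.log2(1e-2 / 1e-12), "steps")
```

Halving the initial uncertainty buys only about one extra step of predictability here, which is the sense in which you can prove things about how far ahead a given model can be predicted.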
Thanks for this reply! This topic also comes up when discussing Ajeya Cotra's biological anchors (how much compute is required to simulate evolution and create AGI in the first place), which is another reason why I was curious about it. If re-running evolution requires simulating the weather, and if this is computationally too difficult, then re-running evolution may not be a viable path to AGI. (And out of all the biological anchors, the evolutionary one is the only one that matters imo.) I wonder if it's worth studying this topic further.
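For context on the sort of number at stake, here is a back-of-envelope sketch of the shape of the evolution-anchor estimate; the three inputs below are illustrative placeholders at roughly plausible orders of magnitude, not Ajeya Cotra's exact figures.

```python
# Back-of-envelope sketch of the *shape* of the evolution anchor:
#   total compute ≈ (seconds of evolution with nervous systems)
#                 × (average number of neuron-bearing organisms alive)
#                 × (FLOP/s to run one such nervous system)
# The three inputs are illustrative placeholders, not the report's exact figures.
seconds_of_evolution = 1e16   # on the order of a billion years
average_population   = 1e21   # dominated by tiny organisms (nematode-scale)
flop_per_organism_s  = 1e4    # a very small nervous system

total_flop = seconds_of_evolution * average_population * flop_per_organism_s
print(f"evolution-anchor-style estimate: ~{total_flop:.0e} FLOP")

# Note: this only counts "brain" compute for the organisms themselves; it says
# nothing about the cost of simulating their environment (weather, chemistry,
# other organisms), which is the concern raised in this thread.
```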
If re-running evolution requires simulating the weather and if this is computationally too difficult then re-running evolution may not be a viable path to AGI.
There are many things that prevent us from literally rerunning human evolution. The evolution anchor is not a proof that we could do exactly what evolution did, but instead an argument that if something as inefficient as evolution spit out human intelligence with that amount of compute, surely humanity could do it if we had a similar amount of compute. Evolution is very inefficient — it has itself been far less optimized than the creatures it produces.
(I’d have more specific objections to the idea that chaos-theory-in-weather in particular would be an issue: I think that a weather-distribution approximated with a different random generation procedure would be as likely to produce human intelligence as a weather distribution generated by Earth’s precise chaotic behavior. But that’s not very relevant, because there would be far bigger differences between Earthly evolution and what-humans-would-do-with-1e40-FLOP than the weather.)
There are many things that prevent us from literally rerunning human evolution. The evolution anchor is not a proof that we could do exactly what evolution did, but instead an argument that if something as inefficient as evolution spit out human intelligence with that amount of compute, surely humanity could do it if we had a similar amount of compute. Evolution is very inefficient — it has itself been far less optimized than the creatures it produces.
Yup, I feel like there are different ways to interpret it; you've picked one interpretation, which is fair!
Another way of interpreting it that I found was: "what's an argument for AI timelines this century that is straightforward and airtight, and doesn't rely on things like hard-to-convey inside views, lots of deference, or arbitrary ways of setting priors?" Many AI risk people seem to agree that if you're aiming for accuracy you can't rely on the anchor as much; at best it's a sort of upper bound. But if you are aiming for airtight arguments that can convince literally anybody, then biological anchors might be more persuasive than other ways of thinking about AI timelines.
And if you are aiming for airtightness, I wonder if "we can literally re-run evolution, and this is how we will do it at a technical level" can be made more airtight than the broader arguments in your first paragraph. [Broader arguments such as: that we can do different things with the compute and still get AGI, that evolution was in fact a "dumb" unoptimised process and not smart in some unknown way, that we as humans can in fact do better than evolution (at finding AGI) because we're smart, that evolution didn't get astronomically lucky because of some instantiation choices, etc.]
(I’d have more specific objections to the idea that chaos-theory-in-weather in particular would be an issue: I think that a weather-distribution approximated with a different random generation procedure would be as likely to produce human intelligence as a weather distribution generated by Earth’s precise chaotic behavior. But that’s not very relevant, because there would be far bigger differences between Earthly evolution and what-humans-would-do-with-1e40-FLOP than the weather.)
This is fair! Although I do wonder more broadly, not just about the weather but about tasks in general: is it possible to train/select/evolve RL agents to get to AGI only by training on fast-to-evaluate tasks, or is training on slow-to-evaluate tasks a necessary condition? By fast-to-evaluate I just mean that a forward pass of the environment is not significantly slower than a forward pass of the agent, so that you can in fact spend most of the training compute on the agent rather than the environment.
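As a toy illustration of that "fast-to-evaluate" condition, here is a minimal sketch with made-up per-step costs, chosen only to show how the environment's share of the training budget depends on its cost relative to the agent's.

```python
# Toy sketch of the "fast-to-evaluate" condition: per-step costs below are
# made-up placeholders, chosen only to show how the environment's share of
# the training budget depends on its cost relative to the agent's.
def env_fraction(agent_flop_per_step, env_flop_per_step):
    """Fraction of total training compute spent simulating the environment."""
    return env_flop_per_step / (agent_flop_per_step + env_flop_per_step)

agent_cost = 1e9   # FLOP per agent forward/backward pass (placeholder)

for env_cost in (1e7, 1e9, 1e13):   # cheap, comparable, and very expensive environments
    share = env_fraction(agent_cost, env_cost)
    print(f"env at {env_cost:.0e} FLOP/step -> {share:.1%} of compute on the environment")
```

In the first case training compute is essentially all spent on the agent; in the last, the agent's learning is a rounding error next to the cost of simulating its world.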
Some of MIRI's work on decision theory does make me wonder whether acting in environments that are more complicated* than you are as an agent is a qualitatively different kind of problem from acting in environments that are simpler than you are.
*Ways an environment may be "complicated": possess more computational complexity than you, contain perfect clones of you, contain agents with much higher intelligence than you, contain chaos-theoretic / quantum / physical / chemical phenomena necessary for life or intelligent behaviour, be literally uncomputable, etc.