By cortical neuron count, systems like AlphaZero are at about the same level as a blackbird (albeit one that lives for 18 years)
That comparison makes me think AI algorithms need a lot of work, because blackbirds seem vastly more impressive to me than AlphaZero. Some reasons:
1. Blackbirds can operate in the real world with a huge action space, rather than a simple toy world with a limited number of possible moves.
2. Blackbirds don’t need to play millions of rounds of games to figure things out. Indeed, they only have one shot to figure the most important things out or else they die. (One could argue that evolution has been playing millions or trillions of rounds of the game over time, with most animals failing and dying, but it’s questionable how much of that information can be transmitted to future generations through a limited number of genes.)
3. Blackbirds seem to have “common sense” when solving problems, in the sense of figuring things out directly rather than stumbling upon them through huge amounts of trial and error. (This is similar to point 2.) Here’s a random example of what I have in mind by common sense: “One researcher reported seeing a raven carry away a large block of frozen suet by using his beak to carve a circle around the entire chunk he wanted.” Presumably the raven didn’t have to randomly peck around on thousands of previous chunks of ice in order to discover how to do that.
Perhaps one could argue that, given enough hardware, relatively dumb trial and error could also get to AGI, whether or not it ever develops common sense. But this gets back to point 1: I’m skeptical that dumb trial and error of the type that works for AlphaZero would scale to a world as complex as a blackbird’s. (Plus, we don’t have realistic simulation environments in which to train such AIs.)
All of that said, I acknowledge there’s a lot of uncertainty on these issues, and nobody really knows how long it will take to get the right algorithms.
Yeah, I agree—I’d rather have a blackbird than AlphaZero. For one thing, it’d make our current level of progress in AI much clearer. But on your second and third points, I think of ML training as somewhat analogous to evolution, and the trained agent as analogous to an animal. Both the training process and evolution are basically blind but goal-directed processes with a ton of iterations (I’m bullish on evolution’s ability to transmit information through generations) that result in well-adapted agents.
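To make the analogy concrete, here’s a toy sketch (all names, the fitness function, and the numbers are invented for illustration): the same blind mutate-and-select loop can be read either as evolution acting on a genome or as a black-box training run producing a well-adapted agent.

```python
import random

def fitness(genome):
    # Toy "environment": higher fitness the closer the genome is to a
    # fixed target vector (negative squared error, so 0 is optimal).
    target = [0.25, -0.5, 1.0, 0.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(generations=5000, mutation_scale=0.1, seed=0):
    """A (1+1)-style loop: mutate the current candidate, keep the
    variant only if it scores at least as well. Read 'parent/child'
    as evolution, or 'current/proposed parameters' as training."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(4)]
    for _ in range(generations):
        child = [g + rng.gauss(0, mutation_scale) for g in parent]
        if fitness(child) >= fitness(parent):  # blind selection step
            parent = child
    return parent

best = evolve()
# fitness(best) ends up close to 0: many small, blindly-found improvements.
```

The loop never “understands” the fitness landscape; it just accumulates small improvements over many iterations, which is the sense in which both evolution and ML training are blind but goal-directed.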
If that’s the right analogy, then we can compare AlphaZero’s superhuman board game abilities with a blackbird’s subhuman-but-general performance. If we’re not meaningfully compute-constrained, then the question is: what kinds of problems will we soon be able to train AI systems to solve? AI research might be one such problem. There are a lot of different training techniques out in the wild, and many of the more impressive recent developments have come from combining multiple techniques in novel ways (with lots of compute). That strikes me as the kind of search space that an AI system might be able to explore much faster than human teams.
Thanks for the interesting post!
My pleasure!