Assuming the median estimate given by Joseph Carlsmith for the compute usage of the human brain, it should eventually be possible to train human-level AI with only about 10^24 FLOP.
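For reference, the rough arithmetic behind that figure, assuming Carlsmith's central estimate of about $10^{15}$ FLOP/s for the brain and roughly 30 years (on the order of $10^9$ seconds) of human learning time:

$$
10^{15}\ \text{FLOP/s} \times 10^{9}\ \text{s} \approx 10^{24}\ \text{FLOP}
$$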
This assumes that AI training algorithms will be as good as human learning algorithms.
Since my statement was that this will “eventually” be possible, I think my claim sets a fairly low bar. All it requires is that, during the pause, algorithmic progress continues until we reach algorithms that match the efficiency of the human brain. Preventing algorithmic progress may be possible, but as I argued, enforcing technological stasis would be very tough.
You might think that the human brain has a lot of “evolutionary pre-training” that is exceptionally difficult to match. But I think this thesis is largely ruled out because of the small size of the human genome, the even smaller part that we think encodes information about the brain, and the even tinier part that differs between chimpanzees and humans.
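To put rough numbers on this, as an illustrative estimate using commonly cited figures (roughly 3 billion base pairs at about 2 bits each, and on the order of 1% sequence divergence from chimpanzees):

$$
3 \times 10^{9}\ \text{bp} \times 2\ \text{bits/bp} \approx 750\ \text{MB}, \qquad 1\% \times 750\ \text{MB} \approx 7.5\ \text{MB}
$$

On this estimate, the entire genome fits in well under a gigabyte, and the human-chimpanzee difference is only on the order of megabytes.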