On AI and Compute

This is a post on OpenAI’s “AI and Compute” piece, as well as excellent responses by Ryan Carey and Ben Garfinkel, Research Fellows at the Future of Humanity Institute. (Crossposted on Less Wrong)

Intro: AI and Compute

Last May, OpenAI released an analysis of AI progress that blew me away. The key takeaway is this: the computing power used in the biggest AI research projects has been doubling every 3.5 months since 2012. That means that more recent projects like AlphaZero have tens of thousands of times the “compute” behind them that something like AlexNet did in 2012.

When I first saw this, it seemed like evidence that powerful AI is closer than we think. Moore’s Law doubled generally-available compute about every 18 months to 2 years, and has resulted in the most impressive achievements of the last half century. Personal computers, mobile phones, the Internet...in all likelihood, none of these would exist without the remorseless progress of constantly shrinking, ever-cheaper computer chips, powered by the mysterious straight line of Moore’s Law.

So with a doubling cycle for AI compute that’s more than five times faster (let’s call it AI Moore’s Law), we should expect to see huge advances in AI in the relative blink of an eye...or so I thought. But OpenAI’s analysis has led some people to the exact opposite view.[1]
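
As a quick check on that “more than five times” figure, here is a minimal sketch; the 18-24 month range is the conventional Moore’s Law doubling time, and 3.5 months is OpenAI’s measurement:

```python
# Ratio of doubling times behind the "more than five times faster" claim.
moore_doubling_months = 18.0  # classic Moore's Law: ~18-24 months per doubling
ai_doubling_months = 3.5      # OpenAI's trend for AI training compute

print(moore_doubling_months / ai_doubling_months)  # ~5.1 (~6.9 for a 24-month cycle)
```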

Interpreting the Evidence

Ryan Carey points out that while the compute used in these projects is doubling every 3.5 months, the compute you can buy per dollar is growing around 4-12 times slower. The trend is being driven by firms investing more money, not (for the most part) inventing better technology, at least on the hardware side. This means that the growing cost of projects will keep even Google and Amazon-sized companies from sustaining AI Moore’s Law for more than roughly 2.5 years. And that’s likely an upper bound, not a lower one; companies may try to keep their research budgets relatively constant. This means that increased funding for AI research would have to displace other R&D, which firms will be reluctant to do.[2] But for lack of good data, for the rest of the post I’ll assume we’ve more or less been following the trend since the publication of “AI and Compute”.[3]
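
To see where a bound like 2.5 years can come from, here’s a minimal sketch of the arithmetic. All inputs are my own illustrative assumptions, not Carey’s exact figures: a ~$10M flagship project today, a ~$10B practical ceiling on a single project’s budget, and compute-per-dollar improving about 6 times slower than the compute trend itself:

```python
import math

cost_now = 10e6                # assumed: ~$10M for today's largest training runs
cost_ceiling = 10e9            # assumed: ~$10B ceiling for a single project
compute_doubling = 3.5         # months per doubling (OpenAI's trend)
efficiency_doubling = 3.5 * 6  # compute/$ grows ~4-12x slower; take 6x here

# Project cost grows at the gap between the two exponentials:
# cost(t) = cost_now * 2^(t/compute_doubling - t/efficiency_doubling)
cost_doubling = 1 / (1 / compute_doubling - 1 / efficiency_doubling)  # ~4.2 months

doublings = math.log2(cost_ceiling / cost_now)         # ~10 doublings of spending
print(f"~{doublings * cost_doubling / 12:.1f} years")  # ~3.5 years with these inputs
```

Pushing the inputs around (a lower spending ceiling, a smaller efficiency correction) lands anywhere from about 2 to 4 years; the point is that every plausible combination gives an answer measured in years, not decades.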

While Carey thinks that we’ll pass some interesting milestones for compute during this time which might be promising for research, Ben Garfinkel is much more pessimistic. His argument is that we’ve seen a certain amount of progress in AI research recently, so realizing that it’s been driven by huge increases in compute means we should reconsider how much adding more will advance the field. He adds that this also means AI advances at the current pace are unsustainable, agreeing with Carey. Both of their views are somewhat simplified here, and worth reading in full.

Thoughts on Garfinkel

To address Garfinkel’s argument, it helps to be a bit more explicit. We can think of the compute in an AI system, or the computational power of a human brain, as mediated by the effectiveness of its algorithms, which is unknown for both humans and AI systems. The basic equation is something like: Capability = Compute * Algorithms. Once AI’s Capability reaches a certain threshold, “Human Brain,” we get human-level AI. We can observe the level of Capability that AI systems have reached so far (with some uncertainty), and have now measured their Compute. My initial reaction to reading OpenAI’s piece was the optimistic one—Capability must be higher than we thought, since Compute is so much higher! Garfinkel seems to think that Algorithms must be lower than we thought, since Capability hasn’t changed. This shows that Garfinkel and I disagree on how precisely we can observe Capability: we can avoid lowering Algorithms only to the extent that our observation of Capability is imprecise and has room for revision. I think he’s probably right that the default approach should be to revise Algorithms downward, though there’s some leeway to revise Capability upward.
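
To make the disagreement concrete, here’s a toy version of that model in code; every number is invented purely for illustration:

```python
# Toy model: Capability = Compute * Algorithms (arbitrary units).
compute_old_guess = 1e19    # what we might have assumed top projects used
compute_measured = 1e23     # what OpenAI-style measurement reveals
capability_observed = 1e21  # our necessarily imprecise read of current capability

# Garfinkel's move: trust the capability reading, so algorithms get revised down.
algorithms_revised = capability_observed / compute_measured  # drops by 10^4

# My initial (optimistic) move: keep the old algorithms estimate,
# so the capability estimate gets revised up instead.
algorithms_old = capability_observed / compute_old_guess
capability_revised = compute_measured * algorithms_old

print(algorithms_revised, capability_revised)
```

How much of the update goes each way depends entirely on how precisely Capability was observed in the first place, which is the crux of the disagreement.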

Much of Garfinkel’s pessimism about the implications of “AI and Compute” comes from the realization that its trend will soon stop—an important point. But what if, by that time, the Compute in AI systems has already surpassed the brain’s?

Thoughts on Carey

Carey thinks that one important milestone for AI progress is when projects have compute equal to running a human brain for 18 years. At that point we could expect AI systems to match an 18-year-old human’s cognitive abilities, if their algorithms successfully imitated a brain or otherwise performed at its level. AI Impacts has collected various estimates of how much compute this might require—by the end of AI Moore’s Law, projects should comfortably reach and exceed it. Another useful marker is the 300-year AlphaGo Zero milestone. The idea here is that AI systems might learn much more slowly than humans—it would take someone about 300 years to play as many Go games as AlphaGo Zero did before beating its previous version, which beat a top-ranked human Go player. A similar ratio might apply to learning to perform other tasks at a human-equivalent level (although AlphaGo Zero’s performance was superhuman). Finally we have the brain-evolution milestone; that is, how much compute it would take to simulate the evolution of a nervous system as complex as the human brain. Only this last milestone is outside the scope of AI Moore’s Law.[4] I tend to agree with Carey that the necessary compute to reach human-level AI lies somewhere around the 18 and 300-year milestones.
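
To put rough numbers on the first two milestones, here’s a sketch using the ~10^16 FLOPS brain estimate discussed in the next paragraph (the petaflop/s-day conversion matches footnote 6):

```python
# Approximate FLOP totals for the 18-year and 300-year milestones.
SECONDS_PER_YEAR = 3.15e7
brain_flops = 1e16  # assumed effective computation of one human brain

milestone_18yr = brain_flops * SECONDS_PER_YEAR * 18    # ~6e24 FLOP
milestone_300yr = brain_flops * SECONDS_PER_YEAR * 300  # ~1e26 FLOP

# For scale: OpenAI credits AlphaGo Zero with ~1,000 petaflop/s-days.
pfs_day_flop = 1e15 * 86400              # ~8.6e19 FLOP per petaflop/s-day
alphago_zero_flop = 1000 * pfs_day_flop  # ~10^23 FLOP
print(f"{milestone_18yr:.0e} {milestone_300yr:.0e} {alphago_zero_flop:.0e}")
```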

But I believe his analysis likely overestimates the difficulty of reaching these computational milestones. The FLOPS per brain estimates he cites are concerned with simulating a physical brain, rather than estimating how much useful computation the brain performs. The level of detail of the simulations seems to be the main source of variance among these higher estimates, and is irrelevant for our purposes—we just want to know how well a brain can compute things. So I think we should take the lower estimates as more relevant—Moravec’s 10^13 FLOPS and Kurzweil’s 10^16 FLOPS (page 114) are good places to start,[5] though far from perfect. These estimates are calculated by comparing areas of the brain responsible for discrete tasks like vision to specialized computer systems—they represent something nearer the minimum amount of computation to equal the human brain than other estimates. If accurate, the reduction in required computation by 2 orders of magnitude has significant implications for our AI milestones. Using the estimates Kurzweil cites, we’ll comfortably pass the milestones for both 18 and 300-year human-equivalent compute by the time AI Moore’s Law has finished in roughly 2.5 years.[6] There’s also some reason to think that AI systems’ learning abilities are improving, in the sense that they don’t require as much data to make the same inferences. DeepMind certainly seems to be saying that AlphaZero is better than Stockfish, a traditional chess engine, at searching a more limited set of promising moves (unfortunately they don’t compare it to earlier versions of AlphaGo on this metric). On the other hand, board games like Chess and Go are probably the ideal case for reinforcement learning algorithms, as they can play against themselves rapidly to improve. It’s unclear how current approaches could transfer to situations where this kind of self-play isn’t possible.
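
Given those lower estimates, a minimal sketch of the timing (one order of magnitude of compute per year, per footnote 6, starting from AlphaGo Zero’s ~10^23 FLOP in October 2017; footnote 6 runs the same arithmetic from March 2019):

```python
import math

# Years until a compute milestone, assuming ~1 order of magnitude growth/year.
def years_until(target_flop, start_flop=1e23, ooms_per_year=1.0):
    return math.log10(target_flop / start_flop) / ooms_per_year

print(years_until(6e24))  # 18-year milestone: ~1.8 years after Oct 2017
print(years_until(1e26))  # 300-year milestone: ~3.0 years after Oct 2017
```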

Final Thoughts

So—what can we conclude? I don’t agree with Garfinkel that OpenAI’s analysis should make us more pessimistic about human-level AI timelines. While it makes sense to revise our estimate of AI algorithms downward, it doesn’t follow that we should do the same for our estimate of overall progress in AI. By cortical neuron count, systems like AlphaZero are at about the same level as a blackbird (albeit one that lives for 18 years),[7] so there’s a clear case for future advances being more impressive than current ones as we approach the human level. I’ve also given some reasons to think that level isn’t as high as the estimates Carey cites. However, we don’t have good data on how recent projects fit AI Moore’s Law. It could be that we’ve already diverged from the trend, as firms may be conservative about drastically changing their R&D budgets. There’s also a big question mark hovering over our current level of progress in the algorithms that power AI systems. Today’s techniques may prove completely unable to learn generally in more complex environments, though we shouldn’t assume they will.[8]

If AI Moore’s Law does continue, we’ll pass the 18 and 300-year human milestones in the next two years. I expect to see an 18-year-equivalent project in the next five, even if the trend slows down. After these milestones, we’ll have some level of hardware overhang[9] and be left waiting on algorithmic advances to get human-level AI systems. Governments and large firms will be able to compete to develop such systems, and costs will halve roughly every 4 years,[10] slowly widening the pool of actors. Eventually the relevant breakthroughs will be made. That they will likely be software rather than hardware should worry AI safety experts, as software advances are harder to monitor and foresee.[11] And once software lets computers approach a human level in a given domain, we can quickly find ourselves completely outmatched. AlphaZero went from a bundle of blank learning algorithms to stronger than the best human chess players in history...in less than two hours.


  1. It’s important to note that while Moore’s Law resulted in cheaper computers (even as the factories that make them grew in scale and complexity), the current compute trend doesn’t seem to be doing the same for AI chips. It’s possible that AI chips will also decrease in cost after attracting more R&D funding and becoming commercially available, but without a huge consumer market, it seems more likely that these firms will mostly have to eat the costs of their investments. ↩︎

  2. This assumes corporate bureaucracy will slow reallocation of resources, and could be wrong if firms prove willing to keep ratcheting up total R&D budgets. Both Amazon and Google are doing so at the moment. ↩︎

  3. Information about the cost and compute of AI projects since then would be very helpful for evaluating the continuation of the trend. ↩︎

  4. Cost and computation figures take AlphaGo Zero as the last available data point in the trend, since it’s the last AI system for which OpenAI has calculated compute. AlphaGo Zero was released in October 2017, but I’m plotting how things will go from now, March 2019, assuming that trends in cost and compute have continued. These estimates are therefore 1.5 years shorter than Carey’s, apart from our use of different estimates of the brain’s computation. ↩︎

  5. Moravec does his estimate by comparing the number of calculations machine vision software makes to the retina, and extrapolating to the size of the rest of the brain. This isn’t ideal, but at least it’s based on a comparison of machine and human capability, not simulation of a physical brain. Kurzweil cites Moravec’s estimate as well as a similar one by Lloyd Watts based on comparisons between the human auditory system and teleconferencing software, and finally one by the University of Texas replicating the functions of a small area of the cerebellum. These latter estimates come to 10^17 and 10^15 FLOPS for the brain. I know people are wary of Kurzweil, but he does seem to be on fairly solid ground here. ↩︎

  6. The 18-year milestone would be reached in under a year and the 300-year milestone in slightly over another. If the brain performs about 10^16 operations per second, 18 years’ worth would be roughly 10^25 FLOP. AlphaGo Zero used about 10^23 FLOP in October 2017 (1,000 petaflop/s-days; 1 petaflop/s-day is roughly 10^20 ops). If the trend is holding, Compute is increasing roughly an order of magnitude per year. It’s worth noting that this would be roughly a $700M project in late 2019 (scaling AlphaZero up 100x and halving costs every 4 years), and something like $2-3B if hardware costs weren’t spread across multiple projects. Google has an R&D budget over $20B, so this is feasible, though significant. The AlphaGo Zero games milestone would take about 14 months more of AI Moore’s Law to reach, or a few decades of cost decreases if it ends. ↩︎

  7. This is relative to 10^16 FLOPS estimates of the human brain’s computation and assuming computation is largely based on cortical neuron count—a blackbird would be at about 10^14 FLOPS by this measure. ↩︎

  8. An illustration of this point is found here, expressed by Richard Sutton, one of the inventors of reinforcement learning. He examines the history of AI breakthroughs and concludes that fairly simple search and learning algorithms have powered the most successful efforts, driven by increasing compute over time. Attempts to use models that take advantage of human expertise have largely failed. ↩︎

  9. This argument fails if the piece’s cited estimates of a human brain’s compute are too optimistic. If more than a couple extra orders of magnitude are needed to get brain-equivalent compute, we could be many decades away from having the necessary hardware. AI Moore’s Law can’t continue much longer than 2.5 years, so we’d have to wait for long-term trends in cost decreases to run more capable projects. ↩︎

  10. AI Impacts’ cost estimates, using the recent trend of an order-of-magnitude decrease every 10-16 years. ↩︎

  11. If the final breakthroughs depend on software, we’re left with a wide range of possible human-level AI timelines—but one that likely rules out timelines centuries in the future. We could theoretically be months away from such a system if current algorithms with more compute are sufficient. See this article, particularly the graphic on exponential computing growth. This completely violates my intuitions of AI progress but seems like a legitimate position. ↩︎