If slow-takeoff AGI is somewhat likely, don’t give now

There’s a longstanding debate in EA about whether to emphasize giving now or giving later – see Holden in 2007 (a), Robin Hanson in 2011 (a), Holden in 2011 (updated 2016) (a), Paul Christiano in 2013 (a), Robin Hanson in 2013 (a), Julia Wise in 2013 (a), and Michael Dickens in 2019 (a).

I think answers to the “give now vs. give later” question rest on deep worldview assumptions, which makes it fairly insoluble (though Michael Dickens’ recent post (a) is a nice example of someone changing their mind about the issue). So here, I’m not trying to answer the question once and for all. Instead, I just want to make an argument that seems fairly obvious to me but that I haven’t seen laid out anywhere.

Here’s a sketch of the argument –

Premise 1: If AGI happens, it will happen via a slow takeoff.

Premise 2: The frontier of AI capability research will be pushed forward by research labs at publicly-traded companies that can be invested in.

  • e.g. Google Brain, Google DeepMind, Facebook AI, Amazon AI, Microsoft AI, Baidu AI, IBM Watson

  • OpenAI is a confounder here – it’s unclear who will control the benefits realized by the OpenAI capabilities research team.

    • From the OpenAI charter (a): “Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.”

  • Chinese companies that can’t be accessed by foreign investment are another confounder – I don’t know much about that space yet.

Premise 3: A large share of the returns unlocked by advances in AI will accrue to shareholders of the companies that invent & deploy the new capabilities.

Premise 4: Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI.

  • It’d be difficult to identify the particular company that will achieve a particular advance in AI capabilities, but relatively simple to hold a basket of the companies most likely to achieve an advance (similar to an index fund).

  • If you’re skeptical of being able to select a basket of AI companies that will track AI progress, investing in a broader index fund (e.g. VTSAX) could be about as good. During a slow takeoff, the returns to AI may well ripple through the whole economy. (A toy sketch of the basket idea follows below.)
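
To make the basket idea concrete, here’s a minimal Python sketch of an equal-weighted allocation. The ticker list is a hypothetical stand-in for the publicly-traded parents of the labs named under Premise 2 (not a recommendation), and equal weighting is just the simplest possible scheme – a real portfolio would need rebalancing rules and would have to grapple with the OpenAI and Chinese-company confounders above.

```python
# Toy sketch of Premise 4's "basket" idea: equal-weight your capital across
# publicly-traded parents of major AI labs. Tickers are hypothetical
# placeholders for the companies named under Premise 2, not recommendations.

BASKET = ["GOOGL", "MSFT", "AMZN", "FB", "BIDU", "IBM"]

def target_allocation(capital: float, tickers: list[str]) -> dict[str, float]:
    """Split capital equally across the basket, index-fund style."""
    per_ticker = capital / len(tickers)
    return {ticker: per_ticker for ticker in tickers}

if __name__ == "__main__":
    for ticker, dollars in target_allocation(10_000.0, BASKET).items():
        print(f"{ticker}: ${dollars:,.2f}")  # each position gets ~$1,666.67
```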

Conclusion: If you’re interested in maximizing your altruistic impact, and you think slow-takeoff AGI is somewhat likely (and more likely than fast-takeoff AGI), then investing your current capital is better than donating it now, because you may achieve (very) outsized returns that can later be deployed to greater altruistic effect as AI research progresses. (A rough compounding illustration follows the notes below.)

  • Note that this conclusion holds for both person-affecting and longtermist views. All you need to believe for it to hold is that a slow takeoff is somewhat likely, and more likely than a fast takeoff.

  • If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (Larks’ 2018 review (a) is a good starting point here).
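
To see why compounding drives the conclusion, here’s a toy back-of-the-envelope comparison. The 5% and 15% annual returns and the 20-year horizon are made-up assumptions, and the sketch deliberately ignores how much good a dollar donated today might compound into (the give-now camp’s central objection), so treat it as an illustration of the mechanism rather than an estimate.

```python
# Toy comparison behind the conclusion: donate $10k now vs. invest it and
# donate the proceeds later. All numbers (5% baseline return, 15% AI-basket
# return, 20-year horizon) are illustrative assumptions, not forecasts.
# Deliberately ignored: any compounding of impact from donating today,
# which is the crux of the give-now side of the debate.

def future_value(capital: float, annual_return: float, years: int) -> float:
    """Capital after compounding at a fixed annual return."""
    return capital * (1.0 + annual_return) ** years

capital, years = 10_000.0, 20

print(f"Donate now:                   ${capital:,.0f}")
print(f"Broad index (5%), give later: ${future_value(capital, 0.05, years):,.0f}")  # ~$26,533
print(f"AI basket (15%), give later:  ${future_value(capital, 0.15, years):,.0f}")  # ~$163,665
```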


Cross-posted to my blog. I’m not an investment advisor, and the above isn’t investment advice.