[Question] Other flavors of FOOM

Robin Hanson says working on AI alignment today is justifiable only in proportion to the risk of a FOOM scenario (a.k.a. hard takeoff, a.k.a. a lumpy AI timeline). I agree, even though the discussion may have moved on a bit.

But “lumpy” timelines don’t seem restricted to AI. Runaway growth of genetically engineered organisms (BLOOM?) seems equally plausible. People have been thinking about climate tipping points for ages.

Can someone point me to any relevant writing on this? I haven’t been able to find anything discussing the utility of studying FOOM-like scenarios (i.e. catastrophically rapid changes due to new technology) in general, rather than just in AI. I’m sure it’s out there—just not sure what to Google.
