My Cause Selection: Dave Denkenberger

I made a very rough spreadsheet that estimates the expected impact of work on different causes for screening purposes. For the risks, it was based on expected damage, the amount of work that has been done so far, and a functional form for marginal impact. I did it from several perspectives, including conventional economic (discounting future human utility), positive utilitarian (maximizing net utility and not discounting), and biodiversity. Note that a pure negative utilitarian focused on reducing aggregate suffering may prefer human extinction (I do not subscribe to this viewpoint); I believe that the future will generally be net beneficial.
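To make the screening logic concrete, here is a minimal sketch of the kind of calculation such a spreadsheet might perform, assuming logarithmic returns to additional work; the function, figures, and cause entries below are illustrative placeholders, not the actual spreadsheet values.

```python
# Illustrative sketch of a screening calculation: score = expected damage
# weighted by diminishing (logarithmic) returns to additional funding.
# All numbers are hypothetical placeholders.
import math

def marginal_impact(expected_damage, work_so_far, annual_funding=1e6):
    """Rough expected impact of an extra year of funding on a risk.

    expected_damage: expected harm if the risk materializes (e.g. lives lost)
    work_so_far:     resources already devoted to the risk (dollars)
    """
    # With log returns, extra funding matters more when little has been
    # spent so far relative to the size of the expected damage.
    return expected_damage * math.log1p(annual_funding / work_so_far)

# Example screening comparison (expected damage, spending to date) -- purely illustrative:
risks = {
    "AI alignment":        (1e10, 1e8),
    "Alternate foods":     (1e9,  1e6),
    "Asteroid deflection": (1e8,  1e9),
}
for name, (damage, spent) in risks.items():
    print(f"{name:20s} score = {marginal_impact(damage, spent):.2e}")
```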

Of course, there has been a lot of talk recently that if one places non-negligible value on future generations, reducing global catastrophic risk is of overwhelming importance. But I have not seen the point that even if you do discount future generations exponentially, there could still be an overwhelming number of discounted consciousnesses if you assign a non-negligible probability to computer consciousness arriving this century. This is plausible because an efficient computer consciousness would use much less energy than a typical human, so vastly more of them could be supported. Furthermore, it would not take very long to construct a traditional Dyson sphere: independent satellites orbiting the sun that absorb most of the sun's output. The satellites would be roughly micron-thick solar cells plus CPUs, and would require only a small fraction of the matter in the solar system. Note that this means that even if one thinks artificial general intelligence will be friendly, it is still of overwhelming importance to reduce the risk of never reaching those computer consciousnesses, which could simply mean a global catastrophe from which technological civilization does not recover. I am open to arguments about far future trajectory changes other than global catastrophic risks, but I think they need to be developed further. This also includes potential mass animal suffering associated with galactic colonization or simulation of worlds.
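To see why exponential discounting does not eliminate this consideration, here is a back-of-the-envelope illustration; the discount rate, arrival date, probability, and population figure are all assumptions chosen for the example, not claims from the text.

```python
# Back-of-the-envelope check (all numbers are illustrative assumptions):
# consciousnesses arriving this century are barely discounted, so a large
# population of efficient computer consciousnesses dominates the total.
discount_rate = 0.03          # assumed annual discount rate
years_until_arrival = 80      # assumed arrival of computer consciousness
n_consciousnesses = 1e25      # assumed number a Dyson sphere could support
p_arrival = 0.1               # assumed probability of arrival this century

discount_factor = (1 + discount_rate) ** (-years_until_arrival)   # about 0.09
discounted_count = p_arrival * n_consciousnesses * discount_factor
print(f"discount factor: {discount_factor:.3f}")
print(f"expected discounted consciousnesses: {discounted_count:.2e}")
# Roughly 9e22 under these assumptions -- still vastly more than the
# ~1e10 humans alive today, even after discounting.
```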

Looking across these global catastrophic risks, I find the most promising to be artificial intelligence alignment, molecular manufacturing, high-energy physics experiments, 100%-lethal engineered pandemics, global totalitarianism, and alternate foods as solutions to global agricultural disruption. Some of these are familiar, so I will focus on the less familiar ones.

Though many regard high-energy physics experiments as safe, a risk of roughly one in a billion per year of turning the earth into a strangelet or black hole, or of destroying the entire visible universe, is still very bad. And the risk could be higher because of model error. Setting the risk aside, I believe the net benefit of these experiments, considering all the costs, is quite low, so I personally think they should be banned.
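For a sense of scale, a rough expected-loss calculation using the one-in-a-billion-per-year figure might look like the following; the population figure is an assumption and future generations are ignored.

```python
# Rough expected-loss arithmetic for the physics-experiment risk.
# Only the ~1-in-a-billion-per-year rate comes from the text above;
# the rest is an illustrative assumption.
p_catastrophe_per_year = 1e-9        # ~1 in a billion per year
current_population = 8e9             # assumed lives lost if Earth is destroyed
expected_lives_lost_per_year = p_catastrophe_per_year * current_population
print(f"{expected_lives_lost_per_year:.1f} expected lives lost per year")   # 8.0
# Several expected lives per year even before counting future generations,
# and model error could make the true rate much higher.
```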

Local totalitarianism, as in North Korea, eventually gets outcompeted. However, global totalitarianism would face no competition and could last indefinitely. This would be bad for the people living under it, but it could also stifle future potential such as galaxy colonization and artificial general intelligence. I am less familiar with the interventions to prevent this.

Global agricultural disruption could occur from risks like nuclear winter, asteroid/comet impact, super-volcanic eruption, abrupt climate change, or agroterrorism. Though many of these risks have been studied significantly, there is a new class of interventions, called alternate foods, that do not rely on the sun (disclosure: I came up with them as solutions to these catastrophes). Examples include growing mushrooms on dead trees and growing edible bacteria on natural gas. I have done some modeling of the cost-effectiveness of alternate food interventions, including planning, research, and development. This will hopefully be published soon, and it indicates that expected lives in the present generation can be saved at significantly lower cost than with typical global poverty interventions. Furthermore, alternate foods would reduce the chance of civilization collapse, and therefore the chance that civilization does not recover.
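As a purely hypothetical illustration of the form such a cost-per-life-saved calculation takes (none of these numbers are from the forthcoming paper):

```python
# Hypothetical cost-per-expected-life-saved calculation for alternate-food
# preparation. Every number here is an assumption for illustration only.
spend = 1e8                        # assumed spending on planning and R&D (dollars)
p_catastrophe_this_century = 0.1   # assumed chance of global agricultural disruption
lives_at_risk = 2e9                # assumed deaths in an unmitigated food catastrophe
fraction_saved = 0.3               # assumed fraction saved if alternate foods are ready

expected_lives_saved = p_catastrophe_this_century * lives_at_risk * fraction_saved
cost_per_life = spend / expected_lives_saved
print(f"expected lives saved: {expected_lives_saved:.2e}")      # 6e7
print(f"cost per expected life saved: ${cost_per_life:.2f}")    # about $1.67
# Under these assumed inputs the cost per expected life saved is a few
# dollars, well below typical global-poverty benchmarks; the published
# paper's actual figures may differ.
```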

There has been earlier discussion of what might constitute a fifth area within effective altruism: effective environmentalism. I would propose that this would not be regulating pollution to save lives in developed countries at $5 million apiece. However, there are frameworks that value biodiversity highly. One could argue that we will eventually be able to reconstruct extinct species or put organisms in zoos to prevent extinction, but the safer route is to keep species alive in the wild. In an agricultural catastrophe, not only would many species go extinct without human intervention, but desperate humans would actively eat many species to extinction. Therefore, I have estimated that the most cost-effective way of saving species is furthering alternate foods.

Overall, there are several promising causes, but I think the most promising is alternate foods. This is because it is competitive with other global catastrophic risk causes, but it has the further benefits of being more cost-effective at saving lives in the present generation than global poverty interventions, and more cost-effective at saving species than conventional interventions like buying rainforest land. I am working on a mechanism to allow people to support this cause.

Edit: here is the cost per life saved paper, now that it is published.