My Cause Selection: Dave Denkenberger

I made a very rough spreadsheet that estimates the expected impact of work on different causes for screening purposes. For the risks, it was based on the expected damage, the amount of work that has been done so far, and a functional form for marginal impact. I did this from several perspectives, including a conventional economic one (discounting future human utility), a positive utilitarian one (maximizing net utility without discounting), and a biodiversity one. Note that a pure negative utilitarian view, which aims only to reduce aggregate suffering, may prefer human extinction; I do not subscribe to that viewpoint, and I believe the future will generally be net beneficial.
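To make the screening logic concrete, here is a minimal sketch of the kind of calculation such a spreadsheet might perform, assuming logarithmic returns to work so that marginal impact scales with expected damage divided by work done so far. The cause names, numbers, and functional form are placeholders of my own choosing, not the actual contents of the spreadsheet.

```python
# Hypothetical sketch of a cause-screening calculation (a reconstruction of the
# general approach, not the author's actual model or numbers).

causes = {
    # cause: (expected damage in arbitrary utility units, work done so far in $M)
    "AI alignment": (1e6, 50.0),
    "engineered pandemic": (1e5, 500.0),
    "agricultural disruption": (1e4, 5.0),
}

def marginal_impact(expected_damage, work_so_far):
    """Marginal impact of one more unit of work, assuming logarithmic returns:
    total risk reduction scales with log(work), so the derivative is 1/work."""
    return expected_damage / work_so_far

# Rank causes by marginal impact per additional $M of work.
for cause, (damage, work) in sorted(
        causes.items(), key=lambda kv: -marginal_impact(*kv[1])):
    print(f"{cause}: marginal impact ~ {marginal_impact(damage, work):,.0f} per $M")
```

Under these placeholder inputs, the less-crowded causes rise to the top even when their expected damage is smaller, which is the point of including work done so far in the screen.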

Of course, there has been a lot of talk recently that if one places non-negligible value on future generations, reducing global catastrophic risk is of overwhelming importance. But I have not seen the point that even if you do discount future generations exponentially, there could still be an overwhelming number of discounted consciousnesses, provided you assign a non-negligible probability to computer consciousness arriving this century. This is plausible because an efficient computer consciousness would use much less energy than a typical human, so vastly more of them could be supported. Furthermore, it would not take very long to construct a traditional Dyson sphere: independent satellites orbiting the sun that absorb most of its output. The satellites would be ~micron-thick solar cells plus CPUs, and would require only a small fraction of the matter in the solar system. Note that this means that even if one thinks artificial general intelligence will be friendly, it is still of overwhelming importance to reduce the risk of never reaching these computer consciousnesses, which could result simply from a global catastrophe from which technological civilization does not recover. I am open to arguments about far-future trajectory changes other than global catastrophic risks, but I think they need to be developed further. This includes potential mass animal suffering associated with galactic colonization or the simulation of worlds.
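A rough worked example of the discounting point, using round numbers of my own: the solar output and ~20 W brain figures are standard approximations, while the 100-year timeline, per-mind power draw, and 3% discount rate are assumptions for illustration.

```python
# Illustrative arithmetic (my own numbers, not from the post): even with
# exponential discounting, digital minds arriving within a century can dominate.

solar_output_w = 3.8e26      # approximate total power output of the sun, watts
watts_per_mind = 20.0        # assume each digital mind needs about a human brain's ~20 W
years_until = 100            # assume the Dyson sphere of minds exists in ~100 years
discount_rate = 0.03         # conventional economic discount rate, 3% per year

minds = solar_output_w / watts_per_mind                 # ~2e25 minds supportable
discount_factor = (1 + discount_rate) ** -years_until   # ~0.05 at 100 years
discounted_minds = minds * discount_factor              # still ~1e24

present_humans = 7e9
print(f"Discounted future minds: {discounted_minds:.1e}")
print(f"Ratio to present population: {discounted_minds / present_humans:.1e}")
```

Even after a century of 3% discounting, the discounted count of minds exceeds the present human population by a factor of roughly 10^14 under these assumptions, which is why the conclusion does not depend on refusing to discount.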

Looking across these global catastrophic risks, I find the most promising causes to be artificial intelligence alignment, molecular manufacturing, high-energy physics experiments, engineered pandemics designed for ~100% lethality, global totalitarianism, and alternate foods as solutions to global agricultural disruption. Some of these are familiar, so I will focus on the less familiar ones.

Though many regard high-energy physics experiments as safe, a risk of roughly one in a billion per year of turning the Earth into a strangelet or black hole, or of destroying the entire visible universe, is still very bad. And the risk could be higher because of model error. Even setting the risk aside, I believe the net benefit of these experiments, considering all the costs, is quite low, so I personally think they should be banned.
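For a sense of scale, here is the back-of-the-envelope expected loss implied by the one-in-a-billion figure, counting only the present generation; the population figure is approximate, and model error, future generations, and the rest of the visible universe are ignored.

```python
# Back-of-the-envelope expected loss from the ~1-in-a-billion annual risk
# quoted above, restricted to the present generation.

p_catastrophe_per_year = 1e-9   # quoted annual probability of destroying the Earth
world_population = 7e9          # approximate present population

expected_deaths_per_year = p_catastrophe_per_year * world_population
print(f"Expected present-generation deaths per year: {expected_deaths_per_year:.0f}")
# ~7 expected deaths per year, before counting model error, future
# generations, or the rest of the visible universe.
```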

Local totalitarianism, as in North Korea, eventually gets outcompeted. However, global totalitarianism would face no competition and could last indefinitely. This would be bad for the people living under it, but it could also stifle future potential such as galactic colonization and artificial general intelligence. I am less familiar with interventions to prevent this.

Global agricultural disruption could occur from risks such as nuclear winter, asteroid/​comet impact, supervolcanic eruption, abrupt climate change, or agroterrorism. Though many of these risks have been studied extensively, there is a new class of interventions, called alternate foods, that do not rely on the sun (disclosure: I proposed them as solutions to these catastrophes). Examples include growing mushrooms on dead trees and growing edible bacteria on natural gas. I have done some modeling of the cost-effectiveness of alternate food interventions, including planning, research, and development. This will hopefully be published soon, and it indicates that expected lives in the present generation can be saved at significantly lower cost than with typical global poverty interventions. Furthermore, alternate foods would reduce the chance of civilization collapsing, and therefore the chance that civilization never recovers.
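As an illustration of the general structure of such an estimate, here is a sketch with placeholder probabilities, costs, and survivor counts of my own; these are not the figures from the modeling or the forthcoming paper.

```python
# Hypothetical structure of a cost-per-expected-life-saved estimate for
# alternate foods, with placeholder inputs (not the paper's numbers).

p_agricultural_catastrophe = 0.01   # assumed probability of a sun-blocking catastrophe over the relevant period
lives_saved_if_prepared = 1e9       # assumed additional survivors if plans and R&D are in place
preparation_cost_usd = 1e8          # assumed cost of planning, research, and development

expected_lives_saved = p_agricultural_catastrophe * lives_saved_if_prepared
cost_per_expected_life = preparation_cost_usd / expected_lives_saved
print(f"Cost per expected life saved: ${cost_per_expected_life:,.0f}")
# With these placeholder inputs, ~$10 per expected life saved in the present
# generation; the actual published estimates will differ.
```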

There has been earlier discussion of what might constitute a fifth area within effective altruism: effective environmentalism. I would propose that this should not be regulating pollution to save lives in developed countries at $5 million apiece. However, there are frameworks that value biodiversity highly. One could argue that we will eventually be able to reconstruct extinct species, or put organisms in zoos to prevent extinction, but the safer route is to keep species alive in the wild. In an agricultural catastrophe, not only would many species go extinct without human intervention, but desperate humans would actively eat many species to extinction. Therefore, I have estimated that the most cost-effective way of saving species is furthering alternate foods.

Overall, there are several promising causes, but I think the most promising is alternate foods. It is competitive with other global catastrophic risk causes, and it has the further benefits of being more cost-effective at saving lives in the present generation than global poverty interventions, and more cost-effective at saving species than conventional interventions such as buying rainforest land. I am working on a mechanism to allow people to support this cause.

Edit: here is the cost-per-life-saved paper, now that it is published.