Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.
This is my favorite introduction to existential risk. It is written loosely from the perspective of global policy, but it is also quite valuable for other approaches to existential risk. Topics discussed (with remarkable lucidity) include:
- Natural vs. anthropogenic existential risk
- Meta-uncertainty
- Qualitative risk categories
- Magnitude of our long-term potential
- The maxipok heuristic
- Classification of existential risks
- Option value
- Sustainable state vs. sustainable trajectory
- Neglectedness of existential risk