...but is increasing the value of futures tractable?

The central question in the current debate is whether marginal efforts should prioritize reducing existential risk or improving the quality of futures conditional on survival. Both are important, and both are neglected, though the latter admittedly more so, at least within EA. This post, however, examines the tractability of shaping the long-term future if humanity survives, and the uncertainty about our ability to do so effectively.

I want to argue, very briefly, that given the complexity of long-term trajectories, the lack of empirical evidence, and the difficulty of identifying robust interventions, efforts to improve the value of the future are significantly less tractable than efforts to reduce existential risk.

We have strong reasons to think we know what the likely sources of existential risk are, as @Sean_o_h’s new paper lays out very clearly. The most plausible risks are well known, and we have at least some paths towards mitigating them, if only in the form of not causing them. On the other hand, if we condition on humanity’s survival, we are dealing with an open-ended set of possible futures that is neither well characterized nor well explored. Exploring those futures is itself not particularly tractable, given their branching nature and the complexity of the systems being predicted. And the problem is not just one of characterizing futures: the tractability of interventions decreases as the complexity of the system increases, especially over multi-century timescales. The complexity of socio-technological and moral evolution makes it infeasible, in my view, to shape long-term outcomes with even moderate confidence. It seems plausible that most interventions would have opposite signs across many plausible futures, and we are unlikely to know either the relative probabilities of those futures or the magnitudes of the impacts.
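To make the sign-uncertainty worry concrete, here is a minimal toy model. It is my own illustration with made-up numbers, not anything drawn from the papers cited here: an intervention that helps in some futures and backfires in others, where we are uncertain about both the probabilities and the magnitudes.

```python
import random

# Toy model: an intervention's long-run value depends on which future obtains.
# In "future A" it helps (+v); in "future B" the same intervention backfires (-w).
# We are deeply uncertain about p = P(future A) and about the magnitudes v and w,
# so we sample them from wide ranges. All numbers are made up for illustration.

def sampled_expected_value(rng: random.Random) -> float:
    p = rng.uniform(0.3, 0.7)   # probability the intervention's effect is positive
    v = rng.uniform(0.5, 2.0)   # benefit if it goes well (arbitrary units)
    w = rng.uniform(0.5, 2.0)   # harm if it backfires (arbitrary units)
    return p * v - (1 - p) * w

rng = random.Random(0)
samples = [sampled_expected_value(rng) for _ in range(100_000)]
positive_share = sum(s > 0 for s in samples) / len(samples)
mean_ev = sum(samples) / len(samples)

# Under this (made-up) uncertainty, the estimated expected value hovers near zero
# and its sign flips across a large fraction of parameter draws.
print(f"mean EV = {mean_ev:.3f}, share of draws with positive EV = {positive_share:.2%}")
```

The point is not the specific numbers but the structure: when sign uncertainty is this deep, the expected value of an intervention is dominated by our uncertainty about which future obtains rather than by anything we can currently estimate.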

And despite @William_MacAskill’s book on the topic, we have very limited evidence about what works to guide the future. This is one of the few criticisms of the entire premise of longtermism that I think should be generally convincing. The exception, of course, is avoiding extinction.

And compared to existential risk, where specific interventions such as biosecurity or AI safety have relatively clear leverage points, increasing the quality of long-term futures is a vast and nebulous goal. There is no single control knob for "future value," which makes interventions more speculative. Identifying interventions today that will robustly steer the future in a particular direction is difficult for two reasons: as noted, we lack strong historical precedent for guiding complex civilizations over thousands of years, and unpredictable attractor states (e.g., technological singularities, value shifts) make long-term interventions unreliable. Work to change this seems plausibly valuable, but also more interesting than important, as I previously argued.