Finally, returning to x-risk. The examples above were also chosen to illustrate a different ‘vibe’ that could apply to x-risk besides ‘impending disaster and heroic drama’. Safety engineering is non-heroic by design: a saviour snatching affairs from the jaws of disaster indicates an intolerable single point of failure. Rather, success is a team effort resilient to any individual’s mistake, where an individual’s excellence only notches the risk down a little further. Yet this work remains both laudable and worthwhile: a career spent investigating not-so-near misses, teasing out the human factors that could make them even more distant, has much to celebrate, even if not much of a highlight reel.
‘Existential safety’ could be something similar. Risks from AI, nukes, pandemics, etc. should be at least as remote as those of a building collapsing, a plane crashing, or a nuclear power plant melting down. Hopefully these risks are similarly remote, and hopefully one’s contribution amounts to a slight incremental reduction. Only the vainglorious would wish otherwise.
Unfortunately, for the biggest risk, AI, the fact that we depend on EA, MIRI, and other organizations to be heroes is a dangerous deficiency in civilizational competence. A lot of that comes down to AGI being weird to outsiders, and politicization being strong enough to make civilizational competence negative. Hell, everything in existential risk could be like this.