What Is The Most Effective Way To Look At Existential Risk?
This article will argue that a focus on particular technology-driven existential risks may be misguided, and that shifting our attention to the source of those risks may be more productive.
When we examine a particular existential risk, we are presuming that we have a chance to mitigate that risk and avoid disaster. And in any particular case that may indeed be true. But there’s more to it...
By examining particular existential risks we are also presuming that we can somehow successfully manage all such risks. After all, it’s not going to matter if we find a perfect solution for, say, AI risk, if we then go on to die in a nuclear war.
Let’s keep in mind that real success will require us to conquer all such existential risks, not just this or that particular risk. A failure in any one case renders whatever successes we do achieve irrelevant.
Before we dive too deeply into the risks presented by any particular technology, it seems wise to first pause and consider whether our effort to avoid all technological existential risks can succeed.
The attempt to calculate our odds of success can begin with an estimate of how many such risks we are talking about. If we look at the source of technological risks, an ever-accelerating knowledge explosion, two things should become clear:
1) Knowledge development feeds back on itself, leading to an ever-accelerating development of knowledge (a minimal sketch of this feedback follows just below). And so...
2) Going forward we will face ever more, ever-larger risks, which will arrive at an ever-faster pace.
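One minimal way to formalize claim (1), under the admittedly crude assumption that new knowledge is produced in proportion to the knowledge K already in hand, at some constant rate r > 0:

dK/dt = r·K   ⟹   K(t) = K₀·e^(r·t)

In this toy model both the stock of knowledge and its growth rate, dK/dt = r·K₀·e^(r·t), increase without bound. The real dynamics are of course messier, but any positive feedback of this kind produces acceleration rather than a plateau, which is all the argument below requires.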
This sounds hopeless at first, but a solution might be found in shifting our focus from examining particular existential risks one by one by one to the accelerating knowledge explosion that is generating all of these risks.
To illustrate, let’s explore a quick thought experiment by imagining that we’re working at the end of an assembly line in an Amazon warehouse.
Amazon’s business is booming, and the packages to be shipped are coming down the assembly line towards us faster and faster. We can keep up with the accelerating assembly line for a while by working hard, working smart, being creative and resourceful, and so on.
But if the assembly line is indeed accelerating, and our human capacity is indeed limited, then common sense suggests that sooner or later the assembly line will overwhelm our ability to keep up.
At that point, our only option would seem to be to try to take control of the assembly line and slow it down to a pace we can manage.
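The arithmetic behind this thought experiment can be sketched in a few lines of code. This is only an illustration; the specific numbers (starting arrival rate, acceleration, fixed capacity) are arbitrary assumptions chosen to make the pattern visible, not estimates of anything real:

```python
# Toy model: packages arrive at an accelerating rate while our
# hourly processing capacity stays fixed. Once arrivals outpace
# capacity, the backlog grows without bound.

arrival_rate = 10.0   # packages arriving per hour at the start (assumed)
acceleration = 1.05   # arrivals grow 5% each hour (assumed)
capacity = 30.0       # packages we can process per hour (assumed, fixed)

backlog = 0.0
for hour in range(1, 101):
    backlog += arrival_rate                 # new packages this hour
    backlog = max(0.0, backlog - capacity)  # what we manage to clear
    arrival_rate *= acceleration            # the line keeps speeding up
    if hour % 20 == 0:
        print(f"hour {hour:3d}: arrivals/hr = {arrival_rate:8.1f}, backlog = {backlog:10.1f}")
```

Working harder corresponds to raising capacity, which only delays the crossover point; as long as arrivals keep accelerating, the backlog eventually diverges. The only structural fix in this toy model is to act on the acceleration itself, which is the analogue of slowing the assembly line down.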
A key obstacle in our war with technology-based existential risks may be that we spend too much time and effort on the details of particular risks, an activity that distracts us from putting our attention where it belongs: on the knowledge-explosion assembly line that is the source of all these risks.
It is the contention of this post that, to the degree we attempt to mitigate particular technological existential risks one by one by one, we are playing a game of whack-a-mole that sooner or later we will inevitably lose. Only by addressing the problem at its source can we have any hope of success, and of survival.
Let’s make this simpler.
Whatever your well-intentioned plans for AI or genetic engineering may be, please explain how you will impose those plans upon engineers in Russia, China, North Korea, Iran, etc.
Whatever your well-intentioned plans for AI or genetic engineering may be, please explain how you will impose those plans upon millions of average citizens all over the world cooking up existential-scale projects in their home workshops.
Readers might debunk this post by explaining how we will successfully manage the ever more, ever-larger powers that will emerge from an accelerating knowledge explosion at an ever-faster pace, seemingly without limit. Debunkers might explain how we will accomplish this for every single existential threat that may emerge, without failing even once.
This is what a focus on particular emerging threats entails: the assumption that we will be able to defeat all such threats, no matter how many there are, how large they grow, or how fast they arrive. And this is what an unwillingness to focus on the machinery generating all of these threats amounts to: a very dangerous irrationality.