Existential risks from within?
(Unimportant discussion of probably useless and confused terminology.)
I sometimes use terms like “inner existential risks” to refer to risk factors like malevolence and fanaticism. Inner existential risks primarily arise from “within the human heart”—that is, they are primarily related to the values, goals and/or beliefs of (some) humans.
My sense is that most x-risk discourse focuses on outer existential risks, that is, x-risks which primarily arise from outside the human mind. These could be physical or natural processes (asteroids, lethal pathogens) or technological processes that originated in the human mind but are now beyond human control (e.g., AI, nuclear weapons, engineered pandemics).
Of course, most people already believe that the most worrisome existential risks are anthropogenic, that is, caused by humans. One could argue that, say, AI and engineered pandemics are actually inner existential risks because they arose from within the human mind. I agree that the distinction between inner and outer existential risks is not super clear. Still, it seems to capture something vaguely real and may serve as some kind of intuition pump.
Then there is the related issue of more external or structural risk factors, like political or economic systems. These are systems invented by human minds which, in turn, shape human minds and values. I will conveniently ignore this further complication.
Other potential terms for inner existential risks could be intra-anthropic, idioanthropic, or psychogenic (existential) risks.