Unimportant bonus info about the history of the term/concept “existential security”, and of this post:
It seems that concepts corresponding to what Ord calls “existential security”, or something similar, had been discussed under various names by various authors for several years prior to the release of The Precipice. But there didn’t seem to be any really detailed discussion of the concept until The Precipice.
And the term “existential security” had almost never been used for this concept, based on the first two pages of results when I googled ““existential security” “existential risk”” in February 2020 (before The Precipice was released). The only really relevant result was Will MacAskill, in a 2018 podcast interview, saying “The first [stage] is to reduce extinction risks down basically to zero, put us [in] a position of kind of existential security”. Most results were just things calling climate change an “existential security risk”.
I was doing that googling in February because I was pretty sure I’d heard of this concept, but I couldn’t find any proper write-up on it, so I decided I might write a post about it myself. I was intending to use the term “existential security”, while also suggesting “existential safety” and “existential stability” as options. But since The Precipice was due out a month later, I decided to hold off until I’d read it, in case Ord discussed this idea.
And indeed, it turned out Ord discussed this concept thoroughly and well, using the very term I’d been leaning towards.[1]
But there was no summary of Ord’s conceptualisation of existential security on the EA Forum or LessWrong. So I decided to adapt my draft into such a summary, along with a discussion of how the concept relates to other terms and concepts. I later abandoned that comparison, though you can find my unpolished notes on it here.
[1] I’m unsure whether this is a result of:
me independently converging on the same idea (perhaps primed by MacAskill’s one mention of the term), or
the idea having occasionally been discussed verbally in ways that reached me (but that I’ve since forgotten), despite it not yet having appeared anywhere online.