What is the significance of the people on the ISS? Are you suggesting that six people could repopulate the human species? And what sort of disaster takes less time than a flight, and only kills people on the ground?
Also, I expect to see small engineered pandemics, but only after effective genetic engineering is widespread. So the fact that we haven’t seen any so far is not much evidence.
Yes, that was broadly the response I had in mind as well. Same goes for most of the “unforeseen”/“other” anthropogenic risks; those categories are in the chapter on “Future risks”, and are mostly things Ord appears to think either will or may get riskier as certain technologies are developed/advanced.
Sleepy reply to Tobias’ “Ord’s estimates seem too high to me”: An important idea in the book is that “the per-century extinction risks from ‘natural’ causes must be very low, based in part on our long history of surviving such risks” (as I phrase it in this post). The flipside of that is roughly the argument that we haven’t got strong evidence of our ability to survive (uncollapsed and sans dystopia) a long period with various technologies that will be developed later, but haven’t been yet.
Of course, that doesn’t seem sufficient by itself as a reason for a high level of concern, as some version of that could’ve been said at every point in history when “things were changing”. But if you couple that general argument with specific reasons to believe upcoming technologies could be notably risky, you could perhaps reasonably arrive at Ord’s estimates. (And there are obviously a lot of specific details and arguments and caveats that I’m omitting here.)