This reminds me of the doomsday argument, and even more of the "black balls" from Bostrom's Vulnerable World Hypothesis.
But I find some issues with it:
First of all, there's no guarantee that a technology that inevitably leads to extinction exists at all.
Second, even if such a technology exists, there's no guarantee that we will ever develop it, regardless of how many people are born in the future. (We might use proper safety measures and avoid it forever, or at least until we go extinct from some other cause.)
Third, even if we do develop it eventually, the speed of its arrival probably depends on many factors, and the total number of human lives is probably not the most important among them. (As more important factors I'd mention the presence or absence of AGI/ASI and whether they are aligned, whether we are pursuing differential technological development, how robust our institutions are at preventing existential risks, how good global coordination and cooperation are, how closely the development of potentially harmful technologies is monitored, etc.)
Fourth, even if such a deterministic relationship does exist, and 200 billion human lives inevitably lead to the development of such a technology, from a utilitarian point of view it doesn't seem to matter much when we reach 200 billion humans who have ever lived, since whenever we reach that point, the total number of humans who have ever lived will be the same.
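To make the fourth point concrete, here's a minimal sketch of my own (assuming a hypothetical hard threshold of 200 billion cumulative births and a rough ~117 billion humans born so far; the birth-rate figures are illustrative, not claims about actual trajectories). Slowing the birth rate only delays the date at which the threshold is hit; it doesn't change the total number of lives ever lived:

```python
# Toy illustration (my own sketch, not from the original argument): if extinction
# is triggered exactly when cumulative births hit a fixed threshold, the total
# number of lives ever lived is the same no matter how fast we get there.
# Only the calendar date of the threshold changes.

THRESHOLD = 200e9      # hypothetical "fatal" cumulative-birth count
ALREADY_BORN = 117e9   # rough estimate of humans born to date

def years_until_threshold(births_per_year: float) -> tuple[float, float]:
    """Return (years until threshold, total lives ever lived at that point)."""
    remaining = THRESHOLD - ALREADY_BORN
    years = remaining / births_per_year
    total_lives = ALREADY_BORN + births_per_year * years  # equals THRESHOLD by construction
    return years, total_lives

for rate in (140e6, 70e6):  # roughly the current global birth rate vs. half of it
    years, total = years_until_threshold(rate)
    print(f"{rate/1e6:.0f}M births/yr -> threshold in ~{years:.0f} years, "
          f"total lives ever lived: {total/1e9:.0f} billion")
```

Both trajectories end with the same 200 billion total lives; only the arrival date differs, so a total-utilitarian sum over lives is unaffected by the timing.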