I think this is directionally likely to be true, and I agree with the main claims of the book that Richard nicely summarizes here. That shouldn’t be too surprising—I work with Mike and Dean and helped with a bunch of the research! Thanks for sparking this debate :)
My main uncertainty has to do with a further idea I’ve been developing (with Loren Fryxell). The thrust of it can be seen with an oversimplified example. Suppose it will take 200 billion human-lives worth of progress to get to the technology that is so dangerous that we go extinct soon after its invention. The effect of adding people today is to accelerate the arrival of this dangerous tech in a way that undoes all of the purported benefits of the earlier people (i.e., the exact same number of lives with the exact same average quality are lived either way). The model we write down agrees with essentially all of the important claims in the book; it just layers on an explicit extinction assumption that turns out to generate a negative effect exactly offsetting the positive ones.
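To see the cancellation concretely, here is a minimal sketch of the toy setup with parameters I've made up for illustration (it is not the actual model from the paper): if extinction arrives once a fixed cumulative number of lives has been lived, faster population growth only moves the extinction date, not the total welfare ever realized.

```python
# Purely illustrative sketch of the toy extinction model above (made-up numbers,
# not the actual model from the paper): extinction hits once a fixed cumulative
# number of lives has been lived, so total welfare doesn't depend on how fast
# the population grows; only the timing of extinction does.

THRESHOLD = 200e9     # cumulative human lives needed to reach the dangerous tech
AVG_QUALITY = 1.0     # assumed average welfare per life (arbitrary units)

def total_welfare(births_per_year: float) -> float:
    """Accumulate lives year by year until the extinction threshold is crossed."""
    lives, years = 0.0, 0
    while lives < THRESHOLD:
        lives += births_per_year
        years += 1
    print(f"{births_per_year:.0e} births/yr -> extinction after {years:,} years")
    return lives * AVG_QUALITY

slow = total_welfare(1e8)  # ~100M births per year
fast = total_welfare(2e8)  # ~200M births per year: extinction arrives twice as soon...
assert slow == fast        # ...but the total welfare ever realized is identical
```

The exact cancellation is an artifact of these simple assumptions; as noted below, changing the shape of the people-progress relationship or the type of existential risk can make the near-term population effect harmful, neutral, or beneficial.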
This toy example isn’t enough to move me to be completely uncertain about whether depopulation is bad (I put 60% agreement, after all). But we show that additional people in the near term can be harmful, neutral, or beneficial depending on the assumptions made about the shape of the people-progress relationship and the types of existential risks we face. I am not very confident that our world is one where those parameters are such that people today are beneficial—there’s too little evidence imo.
This reminds me of the doomsday argument, and even more of the “black balls” from Bostrom’s Vulnerable World Hypothesis.
But I find some issues with it:
First of all, there’s no guarantee that such a technology, one that inevitably leads to extinction, exists at all.
Second, even if there is such a technology, there’s no guarantee that we will ever develop it, regardless of how many people are born in the future. (We might use proper safety measures and avoid it forever, or at least until we go extinct from some other cause.)
Third, even if we do develop it eventually, the speed of its arrival probably depends on many factors, of which the total number of human lives isn’t the most important. (As more important factors, I’d mention the presence or absence of AGI/ASI and whether they are aligned, whether we are pursuing differential technological development, how robust our institutions are at preventing existential risks, how good global coordination and cooperation are, how closely the development of potentially harmful technologies is monitored, etc.)
Fourth, even if such a deterministic relationship does exist, and 200 billion human lives inevitably lead to the development of such a technology, from a utilitarian point of view it doesn’t seem to matter much when we reach 200 billion humans who have ever lived: whenever we reach it, the total number of humans who have ever lived will be the same.