Using the Superforecaster estimates, we need the value of all future utility outside the current epistemic regime to be equivalent to many tens of thousands of years at current consumption and population levels, specifically 66,500-178,000 population-years. With domain experts we obtain much lower estimates. Given implied extinction risks, we would prefer to pause science if future utility is roughly equivalent to 400-1,000 years of current population-years utility.
I don't really understand why the author thinks 100,000x is a difficult threshold to hit for the value of future civilization. My guess is that this falls out of an exponential discount rate. But conditional on any kind of space colonization (which, I'd guess, expert estimates of the kind the author puts a lot of weight on would put at tens-of-percent likely within the next few thousand years), it seems almost inevitable that the human population will grow to at least 100x-10,000x its present size. You then only need to believe in 10-100 years of that kind of future to reach the higher end of the thresholds above, i.e. valuing the future at ~100,000x current population levels.
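To make the arithmetic behind "10-100 years" explicit, here is a minimal sketch. The thresholds and population multipliers are just the illustrative ranges quoted above, not independent estimates:

```python
# Back-of-envelope: how many years of an expanded civilization are needed
# to accumulate a given threshold of current population-years?
# Figures are the illustrative ranges from the text, not precise estimates.

def years_needed(threshold_population_years, population_multiplier):
    """Years at `population_multiplier` times the current population needed
    to accumulate `threshold_population_years` (in current population-years)."""
    return threshold_population_years / population_multiplier

# Superforecaster-implied threshold: ~66,500-178,000 current population-years.
for threshold in (66_500, 178_000):
    for multiplier in (100, 10_000):
        print(f"threshold {threshold:>7,} @ {multiplier:>6,}x population -> "
              f"{years_needed(threshold, multiplier):,.1f} years")
```

At a 10,000x population, even the high threshold takes under 20 years to clear; at 100x it takes under 2,000.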
And of course, in expectation (averaging across many possible futures and taking the heavy right tail into account, as many thinkers have written about), there might well be more than 10^30 humans alive, a scenario that dominates the expected-value estimates here and easily clears the threshold of 100,000x present population value.
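As a toy illustration of how a heavy right tail dominates the expectation: even a tiny probability of a 10^30-person future swamps the 100,000x threshold. The scenario probabilities below are purely hypothetical placeholders, not figures from the report:

```python
# Illustrative expected-value calculation with a heavy right tail.
# Scenario probabilities and population sizes are hypothetical placeholders.

CURRENT_POPULATION = 8e9  # roughly today's population

scenarios = [
    # (probability, total future humans in this scenario)
    (0.999999, 1e11),  # modest, roughly Earth-bound future
    (1e-6,     1e30),  # space-faring heavy-tail future
]

expected_future_humans = sum(p * n for p, n in scenarios)
threshold = 100_000 * CURRENT_POPULATION  # ~8e14

print(f"expected future humans: {expected_future_humans:.3g}")
print(f"100,000x threshold:     {threshold:.3g}")
print(f"tail dominates: {expected_future_humans > threshold}")
```

Here a one-in-a-million chance of the tail scenario contributes ~10^24 expected humans, roughly a billion times the 100,000x threshold.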
To be clear, I am not particularly in favor of halting science, but I find the reasoning in this report not very compelling for that conclusion.
To embrace this as a conclusion, you also need to fairly strongly buy total utilitarianism across the future light cone, as opposed to any view of the future (and the present) on which humanity's value as a species doesn't change much just because there are more people. (Not that I think either view is obviously wrong. Total utilitarianism is so generally assumed in EA that the assumption often goes unnoticed, but it is very much not a widely shared view among philosophers or the public.)