To embrace this conclusion, you also need to buy fairly strongly into total utilitarianism across the future light cone, as opposed to any understanding of the future, and the present, on which humanity as a species doesn't change much in value just because there are more people. (Not that I think either view is obviously wrong, but the former is so generally assumed in EA that it often goes unnoticed, and it is very much not a widely shared view among philosophers or the public.)