Thanks for the considered response. You're right that the Total View is not the only view on which future good lives have moral value (though it does seem to be the main one bandied about). Perhaps I should have written "I don't subscribe to the idea that adding happy people is intrinsically good", as I think that better reflects my position: I subscribe to the Person-Affecting View (PAV).
The reason I prefer the PAV is not the repugnant conclusion (which I don't actually find "repugnant") but the problem of existence comparativism: I don't think that, for a given person, existing can be better or worse than not existing.
Given my PAV, I agree with your last point that there is some moral value in ensuring there are happy people in the future, if that would satisfy the preferences of current people. But in my experience, most people have only weak preferences for the continued existence of "humanity" as a whole. Most people are very concerned about the immediate impacts on those within their moral circle (i.e. themselves and their children, maybe grandchildren), but not much beyond that. So on that basis, I don't think reducing extinction risk will beat out increasing the value of futures where we survive.
To be clear, I don't object to the extinction risk work EA endorses that is robustly good across a variety of worldviews (e.g. preventing all-out nuclear war is great on the PAV, too). But I don't have a problem with humans or digital minds going extinct per se. For example, if humans went extinct because of declining fertility rates (which I don't think is likely), I wouldn't see that as a moral catastrophe requiring intervention.
No, I wouldn't create a person who would spend their entire life in agony. But I think the reason many people, including myself, hold the PAV despite the procreation asymmetry is that we recognise that, in real life, two things are separate: (1) creating a person; (2) making that person happy. I disagree that (1) alone is good. At best, it is neutral. I only think that (2) is good.
If I were to create a child and abandon it, I do not think that is better than not creating the child in the first place. That is true even if the child ends up happy for some other reason (e.g. it is adopted by a great parent).
In contrast, it is indeed possible to create a child who would spend their entire life in agony. In fact, if I created a child and did nothing more, that child's life would likely be miserable and short. So I see the asymmetric preference to avoid creating unhappy lives, without wanting to create happy lives, as entirely reasonable.
Moreover, I do not think moral realism is correct; I see the different views in population ethics as subjective, depending on each person's intrinsic values. And no set of intrinsic values is dictated by logic. Logic can help you find ways to achieve your intrinsic values, but it cannot tell you what those values should be. Logic is a powerful tool, but it has limits, and I think it is important to recognise where it can help and where it can't.