It seems to me that there’s another aspect to longtermism: an explicit formulation of future lives as having measurable importance.
Longtermists seem to think that maximizing the number of future people is itself a moral good: that the more people there are, the greater the altruism of the outcome, all other things being equal (that the people have happy lives, and so on).
Longtermism thereby allows a comparison between the number of present lives and the number of future lives. There are plausible contexts, having to do with the provision of resources, that force a compromise between altruism toward present lives and altruism toward future lives (say, lives 200 years from now).
Therefore, longtermist EA plans can prioritize the welfare of a larger number of nonexistent people over the welfare of a smaller number of existent people (including babies in the womb).
I don’t believe that a nonexistent person who is not certain to exist in the future (pre-conception, before sperm meets ovum) has a moral status that can be weighed against that of a present person, who already exists and will plausibly continue to exist into the future. Notice the kinds of absurd plans such a belief supports once you accept that the future could contain a vast number of people.
For example, when humanity faces existential risk, the expectation that there will be many future people is no longer a given. In that context, longtermism devolves into a plan to ensure that the future contains people, a lot of them, probably at the expense of people who are voluntarily sterile (such as myself), too old to have children, or children not yet close to reproductive age.
I see this taking shape in longtermist goals that aim to bring about far-off and unlikely futures (for example, that we become a space-faring species numbering in the trillions, or that we develop technology to support the lives of digital people) at the expense of actions with far more probable outcomes (for example, alleviating global poverty). Under current conditions, longtermist goals seem harmless, a matter of personal priorities: there are still plenty of people and resources working to alleviate global poverty, even as longtermist efforts divert some resources away from that work. However, the moral framework justifying those goals, built on expected utility calculations not bound by common sense, becomes harmful when humanity faces a genuine existential crisis and resource constraints force real compromises.
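To make concrete what I mean by expected utility calculations not bound by common sense, here is a minimal sketch in Python. The numbers are entirely hypothetical, invented only to show how a tiny probability multiplied by an astronomically large future population can swamp a near-certain benefit to people who already exist.

```python
# A toy sketch of the expected-value arithmetic I am objecting to.
# Every number here is invented purely for illustration; none comes
# from an actual longtermist analysis.

# Present-focused intervention: a near-certain benefit to people who
# already exist (say, alleviating poverty for a million people).
present_people_helped = 1_000_000
p_success = 0.9
ev_present = p_success * present_people_helped

# Longtermist intervention: a vanishingly small chance of enabling an
# astronomically large future population (trillions of space-faring or
# digital people).
future_people_enabled = 10**15
p_far_future = 10**-6
ev_longtermist = p_far_future * future_people_enabled

print(f"Expected lives helped, present-focused: {ev_present:,.0f}")
print(f"Expected lives helped, longtermist:     {ev_longtermist:,.0f}")
# With these made-up numbers the longtermist option "wins" by roughly a
# factor of a thousand, even though every person it counts does not yet
# exist and may never exist.
```

This is the arithmetic that lets the welfare of nonexistent people outweigh the welfare of existent ones whenever the imagined future is made large enough.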
NOTE: I made some light edits to this for clarity some 13 hours after the original post, unfortunately I cannot improve it much more, sorry.