Will longtermists self-efface?

TL;DR

  • Longtermism does not hold that a far future containing many people is a certain future.

  • The moral status of individuals is not a proxy for the moral status of a species or the moral value of producing more individuals.

  • It is a mistake to create a theory of moral value that is contingent on actions that repeat human failures to manage our population size effectively.

  • Longtermists could be more cautious and deliberate in discussing future population sizes and goals relevant to them.

  • Longtermism with a goal of a far future containing fewer humans is morally preferable, all other things equal.

  • Longtermists should self-efface, on the assumption that if they believe that their actions will increase their control over future humans, then they are making errors.

  • One such error is failure to respect the moral status of the humans they control.

  • Longtermism does not provide moral clarity about the difference between preventing causes of harm to future humans and creating causes of help for them.

Introduction

Drawing from a few sources about longtermism, including MacAskill’s own summary of longtermism and Ezra Klein’s recent podcast (though I browsed several more), I want to offer my objections to longtermism as I understand it. Hopefully any interested readers will inform me if the concerns I raise are addressed elsewhere.

The fundamental beliefs of longtermism

So, taking the fundamental beliefs of longtermism to be:

  1. future people have moral status

  2. there can be a lot of future people

  3. we can make their lives better

let’s turn them into questions:

  1. Do future people have moral status?

  2. Can there be a lot of future people?

  3. Can we make their lives better?

and I will provide my answers:

  1. Yes, if you believe that a future person will exist, then that person has moral status.

  2. Sure, it’s plausible that the future will contain lots of future people.

  3. Yes, it’s plausible that people now can make the lives of future people better.

My concerns about the fundamental beliefs of longtermism

My concerns about the beliefs include:

  1. Longtermists want to protect against human extinction. That means longtermism does not hold that future people will exist; rather, it holds that future people could exist, perhaps contingent on longtermist actions.

    Depending on what beliefs longtermists hold and what conditions obtain, longtermists could maximize the likelihood of large future populations by working against the well-being of present populations. In other words, longtermist moral calculations weighing the moral status of future humans against that of present humans could favor actions that bring future humans into existence in ways that work against the welfare or longevity of present humans. While such choices might seem appropriate for other reasons, morality shouldn’t be one of them.

  2. Longtermists do not guarantee that the far future will contain lots of people, but only that it could. It is obvious that any plan that sacrifices the well-being of the present human population to serve a presumed larger future human population will not be morally justifiable, per se, even assuming all sorts of factored-in discount rates. While individuals have moral status, that moral status is not a proxy for the moral status of a species or the moral value of producing more of a species.

    I wonder if the longtermist assumption of a far future containing lots of people with moral status is intended to slip in a theory of value supporting the idea that, all other things equal, a future containing more people is morally preferable to one that contains fewer people. I like humans and the human species and other species, but I oppose any theory of moral value that proposes that conceiving individuals or ensuring species continuity is, per se, a moral act. My reason is that the potential existence of a being does not endow that being with moral status in advance of its conception.

  3. Longtermists do not guarantee the welfare of future people, but only point to the value of contributing to their welfare. Now that I know that longtermists consider the size and welfare of the future human population to be contingent to some extent on longtermist actions, I’m much more interested in longtermism that reduces the number of far-future humans added to my moral calculations.

    Longtermism should bring about a smaller far-future population of beings with moral status relevant to my altruism toward humans. That preference might seem pessimistic or selfish or anthropocentric, but we are currently experiencing resource limits on planet Earth that are extinguishing other species at rates comparable to a mass extinction, an event that has occurred only five times previously in the planet’s 4+ billion-year history. Homo sapiens face an extinction threat from our own mistakes in managing our resources and population size.

    It is a mistake to create a theory of moral value that is contingent on actions that repeat human failures to manage our population size. Of course, there are workarounds for that concern. Longtermists could be more cautious and deliberate in discussing future population sizes and moral goals relevant to future humans.

Longtermists should self-efface

But I have a final concern, and a hope that longtermism will self-efface to address it. In particular, I hope that longtermists will presume that creating utility in the experience of a being with moral status, when accomplished through control of that being in its context, will involve one or more of the following errors:

  1. errors in longtermist accounts of the experience caused for the being.

  2. errors in longtermist beliefs about the control achieved over the being.

  3. errors in longtermist recognition of the moral status of the being.

Total control, as a goal, will suffer at least those three types of error; the least obvious is the last.

You cannot totally control someone that you believe has moral status

Errors of recognition of moral status become clear in thought experiments about ongoing total control over another person’s behavior and experience. One of the implications of that control is the subtraction of any degree of autonomy and independent consciousness from that person. A person subject to control to such a degree that the person has no autonomy and no independent consciousness is also a person without intrinsic value to their controller. A person without intrinsic value is also a person without moral status. A person under total control and without moral status is supremely vulnerable to the instrumental whims of their human controller.

While I don’t doubt the good intentions behind longtermism, there is a practical reality embedded in contexts of influence, having to do with the degree of influence exerted over time. In practice, as the future of other humans’ behaviors becomes less certain (regardless of why), a plausible longtermist response is to seek increased control over those others and their circumstances. The consequence is one of the errors listed earlier. Maintaining control despite those errors has knock-on effects that either increase uncertainty or require additional control, generating more errors.

To accord with reality, I advocate that longtermists self-efface about their attempts to control humans. Longtermists should self-efface by acknowledging that if they believe that their actions will increase their control over future people, then they are making some sort of error already.

I doubt whether longtermists will self-efface in this way, but hopefully they will acknowledge that the life of a person that will never be conceived has no moral status. That acknowledgement will let them avoid some obvious errors in their moral calculations.

Seeking moral clarity about what longtermists cause

Keep in mind that the only requirement for you to control a future person is for you to cause something for that person’s experience or context.

For example, consider a woman walking along a path in a forested park. Many years ago, some dude threw a bottle on that path, shattering it and leaving shards along the path. The woman, wearing sandals and not looking down, walks through some shards that cut her feet.

Now let’s rewind that story. A longtermist is walking along the path, carrying an empty glass water bottle. Out of consideration for possible future people on the path, the longtermist puts the empty bottle in his pack rather than throw it on the path. Years later, a woman walking in sandals down the path finishes her walk without injury on a path free from shattered glass.

Here are some questions about that thought experiment:

* Did the longtermist cause anything in that woman’s experience of her walk?
* How about the dude who threw the bottle down on the path?
* Should someone have caused anything in advance for some other person who went off the path and trampled in flip-flops through some brambles and poison ivy?
* Should someone plan to do something for all the nonexistent people who walk the park paths after the park is turned into a nature preserve closed to tourism? What about the people who will actually still walk the paths?
* Do safe walking paths in that park still have value if no one walks them?

Conclusion

I don’t think that continuation of the species is a moral activity. It is in fact a selfish one if it is undertaken at all. However, our grief over our losses or desire for our gains, current or pending, does not grant us dominion, by right or by fitness, over humans who live now or who might in the future.

When I take a longtermist view, it is morally preferable to me that fewer humans exist in the far future than are alive now, perhaps a few million in four or five hundred years, accomplished through long-term family planning, and maintained for millennia afterward. My preferences reflect my interests and altruistic designs for everyone else now living, including those in the womb.

My belief is that while the longtermist project to ensure a valuable far future containing large populations of humans living valuable lives has no moral justification, a similar project built on selfish preferences is no less feasible or appropriate. I doubt the project’s feasibility either way. I only support it to the extent that it adds value in my selfish calculations of my circumstances and life conditions.

Having children is either a selfish or an altruistic act, from the perspective of parents and others. The decisions of prospective parents are not mine to control, but I wish them well.