Common-sense cases where “hypothetical future people” matter

Motivation

A few weeks ago, I was speaking with a student (call him Josh) who was skeptical of the importance of existential risk and the validity of longtermism. He said something like, “What matters to me is kids going hungry today, not hypothetical future people.” I found this to be a moving, sincere objection, so I thought about where it seemed to go wrong and offered Josh a version of Case 1 below. He seemed pretty convinced.[1]

Josh’s skepticism echoes the dismissal of “merely possible people” expressed by critics who hold presentist person-affecting views — that is, they believe that “an act can only be bad [or good] if it is bad [or good] for someone,” where “someone” is a person who exists at the time of the act. The current non-existence of future people is a common objection to taking their well-being into moral consideration, and it would be good for longtermists to have cases ready to go that illustrate the weaknesses of this view.

I developed a couple more cases in Twitter threads and figured I’d combine them into a linkable forum post.

(Edit: since most of the comments have raised this point, yes, showing that improving the lives of future people seems morally good does not imply that causing more future people to exist is morally good. My intention with these cases is not to create an airtight case for total utilitarianism or to argue against the strongest steelman of person-affecting views. Instead, I want to provide some examples that drive intuitions against an objection that I, and presumably other EAs, commonly encounter “in the wild”: namely, that the interests of future people are a counterintuitive or invalid thing on which to focus your actions. I am not super familiar with more sophisticated person-affecting views. I’ll briefly say that I find Joe Carlsmith’s arguments, along with the transitivity arguments linked in the conclusion, pretty persuasive that we should think creating happy lives is good, and I make a few more arguments in the comments.)

Case 1: The Reformer

You work in a department of education. You spend a full year working on a report on a new kindergarten curriculum that makes kids happier and helps them learn better. It takes a few years for the report to circulate and get approved, and a few more for teachers to learn the new curriculum.

By the time it’s being taught, 6 years have passed since your work. I think your work, 6 years ago, was morally significant because of the happier, better-educated students now. But these kindergarteners are (mostly) 5 years old. They didn’t even exist at the time of your work.

You remember a conversation you had, while working on the curriculum, with your friend who thinks that “hypothetical future people can’t have interests” (and who is familiar with the turnaround times of education reform). The friend shook her head. “I don’t know why you’re working on this kindergarten curriculum for future people,” she said. “You could be helping real people who are alive today. Why not switch to working on a second-grade curriculum?”

Indeed, if only already-existing people matter, you’re in the weird position where your work would have been morally valuable had you written a second-grade curriculum, but your kindergarten curriculum is morally worthless. Why should the birth year of the beneficiaries affect this evaluation?

Case 2: The Climate Resiliency Project

After finishing architecture school, you choose to work at a firm that designs climate resiliency projects. The government of Bangladesh has contracted the firm to design sea walls, on the condition that the work be expedited. You could have worked at a commercial firm for more pay and shorter hours, but you choose the climate-focused firm instead.

The team works for a year on the sea walls project. The Bangladeshi government builds the walls over the next 20 years. In 2042, a cyclone strikes, and the walls save thousands of lives.

Now, you consider how your choice to work at the climate resiliency firm compares to the alternatives. You think your work on the sea walls accounted for, say, 1% of the impact, saving dozens of lives. But maybe you could have donated a big share of your larger salary to the Against Malaria Foundation and saved dozens of lives that way instead.

If “nonexistent future people” don’t matter, we are again in the absurd position of asking, “Well, how many of the people saved were over the age of 20?” After all, those under 20 didn’t exist when you made your career choice, so you should not have taken their non-existent interests into consideration.

As the decades pass and the effects of climate change worsen, the sea walls save more and more lives. But the “future people don’t matter” view holds that these later benefits should matter less in your 2022 decision-making, because more and more of the beneficiaries didn’t yet exist in 2022 to “have interests.”

Case 3: The Exonerated Hiker

William MacAskill writes a crisp case in the New York Times: “Suppose that I drop a glass bottle while hiking. If I don’t clean it up, a child might cut herself on the shards. Does it matter when the child will cut herself — a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs.”

I propose a modification that shows the implausibility of the alternative view.

You drop the bottle and don’t clean it up. Ten years later, you return to the same spot and remember the glass bottle. The shards are still there, and to your horror, a child does cut herself on them before your eyes.

You feel a pang of guilt, realizing that your lack of care 10 years ago was morally reprehensible. But then, you remember the totally plausible moral theory that hypothetical future people don’t matter, and shout out: “How old are you?”

The child looks up, confused. “I’m eight.”

“Whew,” you say. Off the hook! While it’s a shame that a child was injured, your decision not to clean up 10 years ago turns out not to have had any moral significance.

In your moment of relief, you accidentally drop another glass bottle. Since it turned out okay last time, you decide not to clean up this one either.

Conclusion

Looking beyond individual moral actions, these implications might play out in analogous policy or philanthropic decisions. Governments or grantmakers might have to decide whether it’s worth undertaking a study or project that would help kindergarteners in 2028, hikers in 2032, or the residents of floodplains in 2042. But it’s hard for the cost-benefit analysis to work out if you ignore the interests of anyone who doesn’t yet exist. Taking the presentist person-affecting view seriously would lead to dramatically under-investing not only in the long-term future and the reduction of existential risk, but also in very familiar medium-term policies with fairly typical lead times.

Other flavors of person-affecting views might avoid this problem, though they run into transitivity issues of their own. But critics who echo the refrain of “hypothetical non-existent future ‘people’” should be reminded that this stance justifies policy decisions that they would presumably reject.

  1. ^

    He remained skeptical that existential risks, especially from AI, are as high as I claimed, which seems reasonable, since he hadn’t been given very strong evidence. He agreed that if he thought AI was as high a risk as I did, he would change his mind, so this whole episode is further evidence that seemingly normative disagreements are often actually empirical disagreements.