Suffering of the Nonexistent
Summary: We should consider adding entities which don’t exist to our moral circle. For various reasons, our naive ontology, which sorts things into “existent” and “non-existent,” should be questioned. Taken to its logical conclusion, this means that we should consider even logically impossible entities as moral patients.
The only justifiable stopping place for the expansion of altruism is the point at which all whose welfare can be affected by our actions are included within the circle of altruism.
--Peter Singer
Nothing is more real than nothing.
--Samuel Beckett
Effective altruists are no strangers to strange ideas. A core principle underlying much of EA work is the notion of taking ideas seriously. Understanding this principle helps explain why the community works on projects that many people think are complete nonsense. Here are some examples:
The idea that at some point in the coming decades an artificial superintelligence will be constructed, and humanity may become extinct shortly afterwards.
Entities in other universes causally disconnected from our own may still be morally relevant, and we should take steps to enter into positive-sum trades with them.
The natural suffering of animals, which has been occurring for hundreds of millions of years, is a moral tragedy and humans ought to reduce their pain.
There is no other community, as far as I’m aware, which takes these ideas very seriously. It’s no coincidence, either. If you want to be a successful consequentialist, then you must be able to put aside your own personal biases and misconceptions.
Being successful at reducing suffering, or achieving any other goal, requires trying to find out what reality is really like, and accepting what you find. If you simply rationalize to yourself that you are doing good, then you will never discover your own mistakes.
Here, I will argue for something which I predict will be controversial. As a fair warning, I do not believe that everything controversial deserves to be listened to. However, I do believe that most of the things worth listening to were at some point controversial.
Many effective altruists have rightly noted that over time, we humans have become concerned with ever larger circles of moral concern. Roughly speaking, in the beginning all that people cared about was their own tribe and family. Gradually, people shifted to caring about local strangers and people of other religions, races, nationalities and so on. These days animal welfare is still on the fringe, but millions of people recognize that animal suffering should be taken seriously. Effective altruists are pushing the boundary by adding small minds and digital agents.
Brian Tomasik has admirably pushed the boundary a bit further, asking whether we should be concerned with the happenings of fundamental particle physics. The motivation behind this jump is intuitive, even though its conclusion may be highly counterintuitive. How can we be physicalists—that is, believe that physics is all that exists—and maintain that our ethics has nothing to say about the most fundamental parts of our universe? In a different world, perhaps one with a slightly different evolutionary history, this question might seem natural rather than absurd.
The question I have is whether our moral circle should be extended even further. After seeing many people’s reactions to Tomasik’s post, I have no doubt that some people will view this ethical project negatively. However, unlike some, I don’t think that raising the point poses a large hazard. At worst, it will provide our critics with a potential source of mockery and derision. At best it will open our eyes to the most important issue that we should care about. After our long search, perhaps the time for the true Cause X has arrived?
My idea starts from the observation that we are fundamentally confused about reality. Historically, confusion has been a rich source of new discoveries. Physicists were puzzled by the anomalous precession of Mercury’s orbit, a confusion that the general theory of relativity eventually resolved. Even small confusions can lead to large discoveries, and there is no confusion as enormous as the confusion over what is real and what is not.
One technique some particularly reductive philosophers have used to dissolve confusion is to appeal to our own cognitive algorithms. Instead of asking why something is a certain way, for example “Why does anything exist at all?”, we should instead ask what cognitive algorithm produced the question in the first place. In this case, the idea is that our brain has a mental model of “existence” and “non-existence” and has placed us in the former category. Our brain then asks what sort of process could possibly have placed us in that category. The mistake appears almost obvious when put this way: why should we expect any process to have created the universe at all? Processes are causal, in-universe notions, and here we are using them to talk about the universe as a whole. It is no wonder we get confused. We are applying concepts beyond their explanatory boundary.
This simple argument has been used to argue for quite a few radical theories. One idea is to recognize that there really is no distinction between real and non-real entities. Perhaps all possible worlds really exist in some grand ensemble, and we are merely looking at one small part.
There is a certain standard way that this argument goes. I too see some insight in the idea that we are small creatures living in a big world. Everything that is not outright contradictory could perhaps describe a possible world, and no entity within such a world could tell the difference between a possible world and an actual one. Ergo, all possible worlds exist.
The main error I see is that the argument doesn’t go far enough. We still assume that concepts such as “logically possible” and “lawful universe” remain coherent when we look at the larger-scale structure of things. Why do we believe that only logically possible entities should exist? Have we no creativity!
The ultimate confusion comes down to something even deeper: the idea that the language we use to discuss possibilities and impossibilities refers in any way to the true makeup of reality. I hardly have the words to describe my unease with this approach. Indeed, natural language is ill-suited to discussing metaphysical matters without being misinterpreted.
Therefore, out of fear of being misunderstood, I will tread lightly. My main point is not that I have everything figured out, or that people should believe my (at times) wacky metaphysical views. Instead, I want to argue for the weakest possible thesis which can still support my main point: we should extend our moral sympathies to those outside of the traditional boundary we call “existence.”
Understandably, many will find the idea that we should care about non-existent entities to be absurd. One question is how it is even possible for non-existent entities to suffer, if they don’t exist. The implicit premise is that suffering is something that can only happen to logically possible entities. Logically impossible entities are exempt.
My view of existence is different, and doesn’t lend itself to such questions in a natural manner. Take the claim that bats exist. Surely a thesis as solid as “bats exist” could never be disproven. But anyone trained in the art of careful consideration should already see some flaws.
When someone claims that bats exist, they have in their head a model of bat-like things flying around in the darkness. They see this in their mind’s eye, and then internally check whether such bat-like things are consistent with their memory. Their brain’s response is immediate and unquestioned: of course bats exist, I remember seeing one once. But in philosophy we should never accept something merely because it seems obvious.
If our idea of existence is merely that we have some internal check for consistency in our own minds, together with a belief that the universe has a sort of order and rhythm to it, then this idea of existence should scare us. We are making as grand a metaphysical assumption as we can in order to support the much smaller belief that bats really do exist. It should not shock us that this model of ours can be questioned. Evolution designed organisms which could reliably reproduce, not organisms which understood the fundamental nature of reality. It is difficult to convey this idea without analogies to other confusions. Take our confusion over consciousness: the question of what counts as conscious and what doesn’t is similar in that it produces deep-seated (and often irrational) intuitions. Our introspection is far from perfect.
Once we realize how little our assumptions rest on, we can start exploring what reality might really look like. Perhaps the whole notion of existence should be discarded altogether and replaced with more precise ideas.
In what sense do I think we can talk meaningfully about things which don’t exist? I think the paradigm of logically possible versus logically impossible is a good enough model for our purposes. We take existence to be the claim that something is consistent with a certain set of axioms. In this frame, multiverse trade and cooperation are about cooperating with the set of logically possible entities and ignoring those which are not possible. But the distinction is really all in our heads; it is, in a sense, an arbitrary one. You might even call it existencist, by analogy with speciesism.
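To make the “consistency with a set of axioms” framing concrete, here is a minimal sketch. Everything in it is my own illustrative construction: the propositions and toy “axioms” are invented, and nothing in the argument above depends on them. It simply treats “X exists” as “some truth assignment satisfies all the axioms describing X,” and treats a logically impossible entity as one whose description admits no satisfying assignment.

```python
# A toy model (illustrative only): "existence" as consistency with a set of axioms.
# Axioms are predicates over a truth assignment; an entity "exists" in this frame
# if at least one assignment satisfies every axiom describing it.
from itertools import product

def consistent(axioms, variables):
    """Brute-force check: is there any truth assignment satisfying all axioms?"""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(axiom(env) for axiom in axioms):
            return True
    return False

# A "logically possible" entity: a bat-like thing described by compatible claims.
bat_axioms = [
    lambda e: e["flies"],
    lambda e: e["is_mammal"],
    lambda e: (not e["is_mammal"]) or e["is_warm_blooded"],
]
print(consistent(bat_axioms, ["flies", "is_mammal", "is_warm_blooded"]))  # True

# A "logically impossible" entity: its description requires P and not-P at once.
impossible_axioms = [lambda e: e["flies"], lambda e: not e["flies"]]
print(consistent(impossible_axioms, ["flies"]))  # False: no satisfying assignment
```

On this framing, the claim in the post is only that nothing forces us to reserve moral weight for the entities that land on the True side of such a check.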
It is not that logically possible entities “exist” and logically impossible entities “don’t exist.” My thesis could more aptly be stated as the idea that we are biased towards thinking that logically consistent entities are in some sense more real, and therefore due more ethical consideration. I share this same bias, but I have moved away from it somewhat over time. In a way I have come to recognize that the bias for logically possible entities is entirely arbitrary. And to the extent that I want my ethical theories to be elegant and insightful, I want to remove this type of arbitrariness whenever I can.
I still want my ethical theories to do what ethical theories should do: point us in the direction of what we ought to care about. But I have experienced a sort of transformation in my thinking. These days I don’t think it matters much whether it is a logically possible being or a logically impossible being that experiences something. I have moved away from the built-in bias, and having done so, I don’t want to look back.
The obvious next question is “How can we possibly influence something that doesn’t exist?” To answer, I need to take a step back. How is it possible to influence anything? It might seem obvious that we can influence the world through our actions. Our brains naturally run a script which allows us to consider various counterfactuals and then act on the world based on those models. This seems intuitive, but again we introduce a metaphysical confusion just by talking about counterfactuals in this way.
In a deterministic universe there is no such thing as a counterfactual. There is simply that which happened and that which will happen. Our mistake is assuming that our ideas of counterfactuals carry ontological content. Yet somehow we are able to imagine counterfactuals without them actually existing. The exact mechanism by which this occurs is not well understood, and a solution to it would be one of the great successes of decision theory. Regardless, it should give us pause: influencing the world is not as straightforward as it first appears.
In my view, influencing something means playing a sort of symbiotic role with that entity. This is part poetry and part real decision theory. Say I want to produce one paperclip. Given that whether I actually produce the paperclip is just a fact about the environment, in one sense I am making no real decision. In another sense, if I do make the paperclip, then I am playing a role in the causal web of reality, having certain attributes which look, from the inside, like “making a paperclip.” This model matters because it allows us to view actions in a much broader scope. Many people, I believe, are confused by functional decision theory for reasons similar to the ones I have described: they ask how it is possible to influence something which happened in the past, or perhaps in another universe. I offer my model to reduce that confusion. I am one entity in the vast space of logical entities composed of logical parts, and I am playing an ethical role. Different logical pieces necessarily play different roles, but this is the role that I play. Rather than affecting things by exerting a causal influence on the world, I am connected to others by my logical link to them.
How could we possibly sum up everyone who doesn’t exist and decide how to affect them? The question appears intractable, except that we already do something like it with our multiverse intuitions. Many people have proposed that we take an Occam-like measure over the ultimate ensemble and then do our multiverse trading under that assumption. This Occam-like measure should not be construed as a metaphysical claim; it is better viewed as a bias towards simpler mathematical structures. One can instead adopt a different measure, and the number of measures we could come up with is simply breathtaking.
It may feel cold and arbitrary to abandon Occam-like reasoning in the multiverse. What else could we replace it with?! But this is just the human bias of ambiguity aversion. Before, we had no problem using Occam’s razor; now, we feel more uncertain choosing a measure. Yet just because our confusion has been unveiled doesn’t mean we should feel any less like we are actually doing good. The idea that Occam’s razor was a metaphysical matter was simply a mistake we made. Whether we used the razor or not would not have changed the reality in which we live. Now that we recognize Occam’s razor as something more like a personal bias, we have the option of returning to it as before, without the confusion. We should not fear ambiguity in our universe. Ambiguity is what gives humans choice, even if it seems arbitrary.
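For readers who want something concrete, here is a minimal sketch of what an “Occam-like measure” versus one alternative measure over an ensemble might look like. Everything here is my own illustrative construction: the ensemble is a handful of made-up structures labeled by description strings, and the rule of weighting each structure by 2 to the power of minus its description length is just one simple stand-in for a simplicity prior, not anything the proposals above commit to.

```python
# Toy "ensemble": structures labeled by made-up binary description strings.
# (Illustrative placeholders; not anything proposed in the post.)
ensemble = {
    "simple_world": "01",
    "medium_world": "010110",
    "baroque_world": "0110101110011",
}

def occam_measure(descriptions):
    """Weight each structure by 2^-(description length), then normalize."""
    raw = {name: 2.0 ** -len(desc) for name, desc in descriptions.items()}
    total = sum(raw.values())
    return {name: weight / total for name, weight in raw.items()}

def uniform_measure(descriptions):
    """One of breathtakingly many alternatives: ignore simplicity entirely."""
    return {name: 1.0 / len(descriptions) for name in descriptions}

print(occam_measure(ensemble))    # concentrates almost all weight on the simplest structure
print(uniform_measure(ensemble))  # spreads weight evenly, simple or baroque alike
```

The choice between these two functions changes nothing about what “really exists”; it only changes which structures our deliberation ends up weighting, which is the sense in which the razor can be treated as a bias rather than a metaphysical claim.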
So what sort of measure do I have in mind? I don’t really know; I haven’t worked out the details. But I am hesitant to endorse anything that gives its weight to simple structures, whether logically consistent or inconsistent. Simplicity leaves out an enormous number of structures which, I feel, could be in a lot of pain.
What we can do
In this discussion I have noticeably neglected to give any solid recommendations for how we can affect those who do not exist. The reasons for this are twofold:
It’s not really clear at the moment what we can do. We are still too confused. However, we shouldn’t be afraid to widen our moral circles, even if we aren’t sure what to do yet. Widening our moral circles is an asymmetric good. If we widen them too far then we can always pull back at a later point when we have concluded that there really is nothing we can do. If we don’t widen them enough then we become OK with atrocities like factory farming and suffering risks.
Anything which could reduce their suffering is likely to be done much better by artificial intelligence, or by other forms of outsourced cognitive effort.
Given these reasons, I see some potential avenues for improving the lives of those who don’t exist. We can add this to our checklist and make sure that whenever we make decisions about the future, we include the welfare of the non-existent. If we are developing methods of value learning, an additional question might be, “Could this value learning scheme ever care about beings who don’t exist?” I am skeptical that certain entities will end up in our moral circle by default, and I am especially skeptical that entities which don’t exist will end up there unless we make an active effort to ensure that possibility.
Studying logical uncertainty, and more generally the mathematical fields which give us insight into reality, could also help. The easiest thing to do now, I think, is simply to get the idea out there and see what people have to say. Spreading the idea is half the battle.
Comments
Hi, I hope this doesn’t offend, but is this meant to be satire? I’m unclear if that’s the case (and I don’t think this post is well structured whether it’s meant to be satire or serious). If it’s not satire, I’ll engage more.
No offense taken. It’s a serious post, but I completely understand why people would assume otherwise. I can have a bit of an eccentric take on certain topics and I’m probably not the best at explaining my own views :).
If you have a recommendation on how to change the structure to make it look more serious, please tell me.
I agree with Sarthak. You seem to take a long time to get to your point.
Regarding the content of your post, you may be interested in reading up on population ethics. Your post basically maps onto the debate about whether we should adopt a person-affecting view of ethics. https://en.m.wikipedia.org/wiki/Person-affecting_view
I don’t think OP was going for the same idea as the debate over population ethics. The article wasn’t about future people that don’t currently exist but “might” exist as a result of our actions. Rather, it is about people living in worlds whose existence is causally disconnected from us, and logically impossible according to our current understanding of “existence”.
You’re right, I think I didn’t read carefully enough, and I pattern matched to the nearest sensible view.
Got it. I would recommend cutting this post down roughly in half—you take a while to get to the point (stating your thesis in roughly the 14th paragraph). I understand the desire to try and warn the audience for what is coming, but the first section until you get to the thesis just seems overwrought to me. I know cutting is hard, but I’m confident the rewards from increased clarity will be worth it.
I’ve added a summary. Thanks, this was the first time I wrote a post on this forum.
I liked the long introductory exposition, though I also agree with adding the summary.
Agree with the gist of the previous comments. This is just basic semantic confusion: people or agents who do not exist exist only as theoretical exercises in mindspace, such as this one; by definition they do not exist in any other space, and so cannot have real rights to which ethics should be applied.
So focusing on [not just some but] all nonexistent beings is not controversial, it is just wrong on a basic level. Ironically, I do not think this is being closed-minded; it is simply true by definition.
What would be more productive to discuss are possible agents who could be realized, and their potential rights, which is a fundamentally different question, although not mutually exclusive with nonexistent agents.
There’s a small, new literature analysing the subset of nonexistence I think you mean, under the name “impossible worlds”. (The authors have no moral or meta-ethical aims.) It might help to use their typology of impossible situations: Impossible Ways vs Logic Violators vs Classical Logic Violators vs Contradiction-Realizers.
To avoid confusion, consider ‘necessarily-nonexistent’ or ‘impossible moral patients’ or some new coinage like that, instead of just ‘nonexistent beings’ otherwise people will think you’re talking about the old Nonidentity Problem.
I think you’ll struggle to make progress, because the intuition that only possible people can be moral patients is so strong, stronger than the one about electrons or microbial life and so on. In the absence of positive reasons (rather than just speculative caution), the project can be expected to move attention away from moral patients to nonpatients—at least, your attention.
Meta: If you don’t want to edit out the thirteen paragraphs of preamble, maybe add a biggish summary paragraph at the top; the first time I read it (skimming, but still) I couldn’t find the proposition.
I’ve taken your advice and added a summary. Thanks for the information, it’s a very insightful comment!
Every time we consider a utility function, its constituent components will carve out large swaths of non-existence space which it cannot apply to. Maximizing happiness excludes those which have no concept of happiness/reward (and offends the non-existent ones for whom reward is pain!).
The space of non-existence is simply too vast and self-contradictory to merit consideration. Any utility function will cancel out to 0 across the entire space (because one can conceivably contrive just as many universes where it harms as where it helps).
Though, given this challenge, someone will probably munchkin a self-referential, Turing-complete utility function that can have a uniform effect across the entire space of existent and non-existent entities.
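As a toy check of the cancellation claim in the previous comment (that for every contrived structure where a utility function registers benefit, one can contrive a mirror structure where it registers harm), here is a short sketch. The structures and utilities are invented for the example, and the sketch assumes the mirror structures really are contrivable.

```python
# Illustrative only: pair every contrived structure with a "mirror" whose utility
# is exactly negated, so the sum over the paired space comes out to zero.
utilities = {"world_A": 3.0, "world_B": -1.5, "world_C": 0.25}
mirrors = {name + "_mirror": -u for name, u in utilities.items()}

total = sum(utilities.values()) + sum(mirrors.values())
print(total)  # 0.0: the pairing argument, granted the assumption above
```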