Suffering of the Nonexistent

Summary: We should consider adding entities which don't exist to our moral circle. For various reasons, our naive ontology of separating things into "existent" and "non-existent" should be questioned. Taken to its logical conclusion, this means that we should consider even logically impossible entities as moral patients.

The only justifiable stopping place for the expansion of altruism is the point at which all whose welfare can be affected by our actions are included within the circle of altruism.
--Peter Singer

Nothing is more real than nothing.
--Samuel Beckett

Effective altruists are no strangers to strange ideas. A core principle which underlies much of EA work is the notion of taking ideas seriously. Understanding this principle can help explain why the community works on projects that many people think are complete nonsense. Here are some examples:

  • The idea that at some point in the coming decades an artificial superintelligence will be constructed, and humanity may become extinct shortly afterwards.

  • Entities in other universes causally disconnected from our own may still be morally relevant, and we should take steps to enter a positive-sum trade with them.

  • The natural suffering of animals, which has been occurring for hundreds of millions of years, is a moral tragedy, and humans ought to reduce their pain.

There is no other community, as far as I'm aware, which takes these ideas very seriously. It's no coincidence, either. If you want to be a successful consequentialist, then you must be able to put aside your own personal biases and misconceptions.

Being successful at reducing suffering, or achieving any other goal, requires trying to find out what reality is really like, and accepting what you find. If you simply rationalize to yourself that you are doing good, then you will never discover your own mistakes.

Here, I will argue for something which I predict will be controversial. As a fair warning, I do not believe that everything controversial should be listened to. However, I do believe that most of the things worth listening to were at some point controversial.

Many effective altruists have rightly noted that over time, we humans have become concerned with ever larger circles of moral concern. Roughly speaking, in the beginning all that people cared about was their own tribe and family. Gradually, people shifted to caring about local strangers and people of other religions, races, nationalities, and so on. These days animal welfare is still on the fringe, but millions of people recognize that animal suffering should be taken seriously. Effective altruists are pushing the boundary by adding small minds and digital agents.

Brian Tomasik has admirably pushed the boundary a bit further, asking whether we should be concerned with the happenings of fundamental particle physics. The motivation behind this jump is intuitive, even though its conclusion may be highly counterintuitive. How can we be physicalists (that is, believe that physics is all that exists) and maintain that our ethics has nothing to say about the most fundamental parts of our universe? In a different world, perhaps one with a slightly different evolutionary history, this question might seem natural rather than absurd.

The question I have is whether our moral circle should be extended even further. After seeing many people's reactions to Tomasik's post, I have no doubt that some people will view this ethical project negatively. However, unlike some, I don't think that making the point is a large hazard. At worst, it will provide our critics with a potential source of mockery and derision. At best, it will open our eyes to the most important issue that we should care about. After our long search, perhaps the time for the true Cause X has arrived?

My idea starts with the observation that we are fundamentally confused about reality. Historically, confusion has been a rich source of new discoveries. Physicists were confused by the anomalous precession of Mercury's orbit, which led them to discover the General Theory of Relativity. Even small confusions can lead to large discoveries, and there is no confusion as enormous as the confusion over what's real and what's not real.

One technique some particularly reductive philosophers have used to dissolve confusion is appealing to our own cognitive algorithms. Instead of asking why something is a certain way, for example "Why does anything exist at all?", we should instead ask what cognitive algorithm produced the question in the first place. In this case, the idea is that our brain has a mental model of "existence" and "non-existence" and has placed us in the former category. Then, our brain asks what sort of process could have possibly placed us in that category. The mistake appears almost obvious if you put it this way: why should we expect any process to create the universe at all? Processes are causal things, and we are using this in-universe speak to talk about the entire universe. It is no wonder that we get confused. We are applying concepts beyond their explanatory boundary.

This simple argument has been used to argue for quite a few radical theories. One idea is to recognize that there really is no distinction between real and non-real entities. Perhaps all possible worlds really exist in some grand ensemble, and we are merely looking at one small part.

There is a certain standard way that this argument goes. I too see some insight in the idea that we are small creatures living in a big world. Everything that is not outright contradictory could perhaps describe a possible world, and no entity within such a world could tell the difference between a possible world and an actual one. Ergo, all possible worlds exist.

The main error I see is that the argument doesn't go far enough. We still assume that concepts such as "logically possible" and "lawful universes" remain coherent when we look at the larger-scale structure of things. Why do we believe that only logically possible entities should exist? Have we no creativity?

The ultimate confusion comes down to something even deeper: the idea that any of the language we use to discuss possibilities and impossibilities refers in any way to the true makeup of reality. I hardly have the words to describe my unease with this approach. Indeed, natural language is unsuitable for discussing metaphysical matters without being misinterpreted.

Therefore, out of fear of being misunderstood, I will tread lightly. My main point is not that I have everything figured out, or that people should believe my (at times) wacky metaphysical views. Instead, I want to argue for the weakest possible thesis which can still support my main point: we should extend our moral sympathies to those outside the traditional boundary we call "existence."

Understandably, many will find the idea that we should care about non-existent entities to be absurd. One question is how it is even possible for non-existent entities to suffer, if they don't exist. The implicit premise is that suffering is something that can only happen to logically possible entities. Logically impossible entities are exempt.

My view of existence is different, and doesn't lend itself to such questions in a natural manner. Take the claim that bats exist. Surely, a thesis as solid as "bats exist" could never be disproven. But anyone trained in the art of careful consideration should already see some flaws.

When someone makes the claim that bats exist, in their head they have a model of bat-like things flying around in the darkness. They see that in their mind's eye, and then internally check whether such bat-like things are consistent with their memory. Their brain's response is immediate and unquestioned: of course bats exist, I remember seeing one once. But in philosophy we should never accept something as obvious.

If our idea of existence is merely that we have some internal check for consistency in our own minds, and that we believe the universe has a sort of order and rhythm to it, then this idea of existence should scare us. We are making a metaphysical assumption as grand as we can in order to support a much smaller belief: that bats really do exist. It really shouldn't shock us that it is possible to question this model of ours. Evolution designed organisms which could reliably reproduce, not ones which understood the fundamental nature of reality. It is difficult to understand this idea without introducing analogies to other confusions. Take our confusion over consciousness: the question of what counts as conscious and what doesn't is similar in the sense that it produces deep-seated (and often irrational) intuitions. We are not capable of perfect introspection.

Once we realize that our assumptions are a bit baseless, we can start exploring what reality might really look like. Perhaps this whole notion of existence should be discarded altogether and replaced with more precise ideas.

In what sense do I think we can talk meaningfully about things which don't exist? I think the logically possible versus logically impossible paradigm is a good enough model for our purposes. We take existence to be the claim that something is consistent with a certain set of axioms. In this frame, multiverse trade and cooperation are about cooperating with the set of logically possible entities and ignoring those which are not possible. But the distinction is really all in our heads; it is a sort of arbitrary distinction. You might even call it "existencist," by analogy with speciesism.
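
To pin down this frame, here is a minimal sketch in Python of "existence as consistency with a set of axioms": a brute-force satisfiability check over toy propositions. Everything in it (the `satisfiable` helper, the flying-bat axiom, the clauses) is invented for illustration; it shows only the shape of the frame, not a metaphysical claim.

```python
from itertools import product

# Toy frame: a description "exists" relative to a set of axioms exactly
# when the combined clauses are satisfiable. Clauses are sets of string
# literals like "bat" or "~bat" (negation); all names are illustrative.

def satisfiable(clauses):
    """Brute-force check: does any truth assignment satisfy every clause?"""
    variables = sorted({lit.lstrip("~") for clause in clauses for lit in clause})
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(any(world[lit.lstrip("~")] != lit.startswith("~") for lit in clause)
               for clause in clauses):
            return True
    return False

axioms = [{"~bat", "flies"}]             # axiom: all bats fly (bat -> flies)
possible_bat = [{"bat"}, {"flies"}]      # a flying bat: consistent
impossible_bat = [{"bat"}, {"~flies"}]   # a flightless bat: contradicts the axiom

print(satisfiable(axioms + possible_bat))    # True  -> "exists" in this frame
print(satisfiable(axioms + impossible_bat))  # False -> "impossible" in this frame
```

In this toy frame, "existencism" is simply the policy of assigning moral weight only to descriptions for which the check returns True.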

It is not that logically possible entities "exist" and logically impossible entities "don't exist." My thesis could more aptly be stated as the idea that we are biased towards thinking that logically consistent entities are in some sense more real, and therefore due more ethical consideration. I share this same bias, but I have moved away from it somewhat over time. In a way, I have come to recognize that the bias for logically possible entities is entirely arbitrary. And to the extent that I want my ethical theories to be elegant and insightful, I want to remove this type of arbitrariness whenever I can.

I still want my ethical theories to do what ethical theories should do: point us in the direction of what we ought to care about. But I have experienced a sort of transformation in my thinking patterns. These days I don't think it really matters much whether it is a logically possible being or a logically impossible being that experiences something. I have moved away from the in-built bias, and having done so, I don't want to look back.

The obvious next question is, "How can we possibly influence something that doesn't exist?" In order to answer, I need to take a step back. How is it possible to influence anything? It might seem obvious that we can influence the world by our actions. Our brains naturally run a script which allows us to consider various counterfactuals and then act on the world based on its models. This seems intuitive, but again we introduce a metaphysical confusion just by talking about counterfactuals in this way.

In a deterministic universe there is no such thing as a counterfactual. There is simply that which happened, or that which will occur. Our mistake is assuming that there is ontological content contained in our ideas of counterfactuals. Yet somehow we are able to imagine counterfactuals without them actually existing. The exact mechanism of how this occurs is not well understood, and a solution to it could be one of the largest successes in decision theory today. Regardless, it should give us some pause to consider the idea that influencing the world is not as straightforward as it first appears.

In my view, influencing something is playing a sort of symbiotic role with that entity. This is part poetry, and part real decision theory. Let's say that I wanted to produce one paperclip. Given that whether I actually produce the paperclip is just a fact about the environment, in some sense I am making no real decision. In another sense, if I do make the paperclip, then I am playing a role in the causal web of reality, having certain attributes which look from an inside view like "making a paperclip." This model is important because it allows us to view actions in a much broader scope. Many people, I believe, are confused by functional decision theory for reasons similar to the ones I have described. They ask how it is possible to influence something which happened in the past, or perhaps in another universe. I offer my model to reduce their confusion. I see that I am one entity in the vast space of logical entities composed of logical parts, and I am playing an ethical role. Different logical pieces necessarily play different roles, but this is the role that I play. Rather than affecting things by having a causal influence on the world, I am connected to others by my logical link to them.
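
As one concrete toy in the spirit of this picture (and of functional decision theory), consider two causally disconnected copies of the same decision procedure playing a prisoner's dilemma. The sketch below is my own illustration with invented payoff numbers, not anyone's canonical formulation: choosing what the shared procedure outputs fixes both agents' acts at once, which is what influence by logical link rather than causal contact amounts to here.

```python
# Two agents in causally disconnected "universes" run the *same*
# decision procedure. Fixing the procedure's output fixes both acts
# at once: influence via logical link, not causal contact.
# The scenario and payoffs are invented for illustration.

def payoff(my_act, twin_act):
    """Prisoner's-dilemma payoffs for one agent (higher is better)."""
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return table[(my_act, twin_act)]

def best_policy():
    """Choose the output of the shared decision procedure itself.

    Because the twin instantiates the same procedure, its act is not
    an independent variable: both acts are the single logical fact of
    what this function returns.
    """
    return max(["C", "D"], key=lambda act: payoff(act, act))

act = best_policy()
print(act, payoff(act, act))  # C 3: cooperation, once the logical link is respected
```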

How could we possibly sum up everyone who doesn't exist and decide how to affect them? This question appears intractable, except that we already do something similar with multiverse intuitions. Many people have proposed that we should take an Occam-like measure over the ultimate ensemble, and then do our multiverse trading under this assumption. This Occam-like assumption should not be construed as a metaphysical claim; it should be viewed more like a bias towards simpler mathematical structures. One can instead have a different measure. The number of measures we can come up with is simply breathtaking. It may feel cold and arbitrary to abandon Occam-like reasoning in the multiverse (what else could we replace it with?), but this is just the human bias of ambiguity aversion. Consider that before, we had no problem with using Occam's razor. Now, we feel more uncertain choosing a measure. But just because our confusion has been unveiled doesn't mean that we should feel any less like we are actually doing good. The idea that Occam's razor was a metaphysical matter was simply a mistake we made. Whether we used the razor or not would not have changed the reality in which we live. At least now that we realize Occam's razor is more like a personal bias, we have the option of returning to it as before, without confusion. We should not fear ambiguity in our universe. Ambiguity is what gives humans choice, even if it seems arbitrary.
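
To make concrete how freely the measure can be chosen, here is a minimal sketch in Python, with everything in it (the three "worlds" and their description lengths) invented for illustration. It weights each world once by an Occam-like rule, assigning weight 2^-k to a k-bit description and normalizing, and once uniformly.

```python
# Toy ensemble: world name -> made-up description length in bits.
# These names and numbers are purely illustrative.
worlds = {"w_simple": 3, "w_medium": 10, "w_baroque": 25}

def occam_measure(complexities):
    """Weight each world by 2^(-description length), then normalize."""
    raw = {w: 2.0 ** -k for w, k in complexities.items()}
    total = sum(raw.values())
    return {w: v / total for w, v in raw.items()}

def uniform_measure(complexities):
    """An equally coherent alternative: ignore simplicity entirely."""
    n = len(complexities)
    return {w: 1.0 / n for w in complexities}

print(occam_measure(worlds))    # nearly all weight on w_simple
print(uniform_measure(worlds))  # equal weight, simple or baroque
```

Nothing privileges the first function over the second; that is the ambiguity described above.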

So what sort of measure do I have in mind? I don't really know; I haven't worked out the details. But I am hesitant to endorse something that gives weight only to simple structures, whether logically consistent or inconsistent. Simplicity leaves out a ton of structures which, I feel, could be in a lot of pain.

What we can do

In my discussion I have noticeably neglected to give any solid recommendations for how we can affect those which do not exist. The reasons for this are twofold:

  • It's not really clear at the moment what we can do. We are still too confused. However, we shouldn't be afraid to widen our moral circles, even if we aren't sure what to do yet. Widening our moral circles is an asymmetric good. If we widen them too far, then we can always pull back at a later point, once we have concluded that there really is nothing we can do. If we don't widen them enough, then we become OK with atrocities like factory farming and suffering risks.

  • Anything which could alleviate their suffering is likely to be done much better by artificial intelligence, or other types of outsourced cognitive effort.

For these reasons, I see some potential avenues for improving the lives of those who don't exist. We can add this to our checklist, and make sure that whenever we are making decisions about the future, we include the welfare of the non-existent. If we are developing methods of value learning, an additional question might be, "Could this value learning scheme ever care about beings who don't exist?" I am skeptical that certain entities will end up in our moral circle by default, and I am especially skeptical that entities which don't exist will end up there unless we make an active effort to ensure that possibility.

Studying logical uncertainty, and more generally the mathematical fields which give us insight into reality, could help. The easiest thing right now, I think, is just getting the idea out there and seeing what people have to say. Spreading the idea is half the battle.