Stefan Schubert: Psychology of Existential Risk and Long-Termism

Consider three scenarios: scenario A, where humanity continues to exist as we currently do; scenario B, where 99% of us die; and scenario C, where everyone dies. Clearly scenario A is better than scenario B, and scenario B is better than scenario C. But exactly how much better is B than C? In this talk from EA Global 2018: London, Stefan Schubert describes his experiments examining public opinion on this question, and how best to encourage a more comprehensive view of extinction's harms.

A transcript of Stefan's talk is below, including questions from the audience, which CEA has lightly edited for clarity. You can also read this talk on effectivealtruism.org, and watch it on YouTube.

The Talk

Here is a graph of economic growth over the last two millennia.

[Slide 1]

As you can see, for a very long time there was very little growth, but then it gradually started to pick up during the 1700s, and in the 20th century it really skyrocketed.

So now the question is: what can we tell about future growth, on the basis of this picture of past growth?

[Slide 2]

Here is one possibility, perhaps the one closest at hand: that growth will continue into the future, and hopefully into the long-term future. That would mean not only greater wealth, but also better health, extended lifespans, more scientific discoveries, and more human flourishing in all kinds of other ways. In short, a much better long-term future.

But, unfortunately, that's not the only possibility. Experts worry that growth could continue for some time, but that civilization would then collapse.

[Slide 3]

For instance, civilization could collapse because of a nuclear war between great powers, or an accident involving powerful AI systems. Experts worry that civilization wouldn't recover from such a collapse.

[Slide 4]

The philosopher Nick Bostrom, at Oxford, has called these kinds of collapses or catastrophes "existential catastrophes." One kind of existential catastrophe is human extinction: the human species goes extinct, and no humans ever live again. That will be my sole focus here. But Bostrom also defines another kind of existential catastrophe, in which humanity doesn't go extinct but its potential is permanently and drastically curtailed. I won't talk about such existential catastrophes here.

[Slide 5]

Together with another Oxford philosopher, the late Derek Parfit, Bostrom has argued that human extinction would be uniquely bad, much worse than non-existential catastrophes. That is because extinction would forever deprive humanity of a potentially grand future, the kind of future we saw on one of the preceding slides.

To make this intuition clear and vivid, Parfit created the following thought experiment, in which he asked us to consider three outcomes: first, peace; second, a nuclear war that kills 99% of the human population; and third, a nuclear war that kills 100% of the human population.

Parfit's ranking of these outcomes, from best to worst, was as follows: peace is the best, near extinction is number two, and extinction is the worst. No surprises so far. But then he asks a more interesting question: which difference, in terms of badness, is greater? Is it the First Difference, as we call it, between peace and 99% dead? Or the Second Difference, between 99% dead and 100% dead? I will use this terminology, the First Difference and the Second Difference, throughout this talk, so it will be good to remember it.

[Slide 7]

So which difference do you find greater? That depends on what key value you hold. If your key value is the badness of individual deaths and the individuals who suffer, then you're going to think that the First Difference is greater, because the First Difference is greater in terms of individual deaths. But there is another key value one might hold: that extinction, and the lost future it entails, is very bad. And of course, only the third of these outcomes, a 100% death rate, means extinction and a lost future.

So only the Second Difference involves a comparison between an extinction outcome and a non-extinction outcome. This means that if you focus on the badness of extinction and the lost future it entails, then you're going to think that the Second Difference is greater.

Parfit hypothesized that most people would find the First Difference greater, because they would focus on the individual deaths and all the individuals who suffer. This is, in effect, a psychological hypothesis. But his own ethical view was that the Second Difference is greater, which in effect means that extinction is uniquely bad, and much worse than a non-existential catastrophe. That is because Parfit's key value was the lost future that human extinction would entail.
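The two key values can be made concrete with a toy calculation. This is purely an illustrative sketch: the population figure and the value assigned to humanity's future are made-up numbers for the sake of the example, not anything from Parfit or from the study.

```python
POPULATION = 8_000_000_000  # rough current world population, for illustration


def badness(dead_fraction, future_value):
    """Total badness of an outcome under the two key values:
    individual deaths, plus the lost future if humanity goes extinct."""
    deaths = POPULATION * dead_fraction
    lost_future = future_value if dead_fraction == 1.0 else 0.0
    return deaths + lost_future


def differences(future_value):
    """Return (First Difference, Second Difference) for Parfit's outcomes."""
    peace = badness(0.0, future_value)
    near_extinction = badness(0.99, future_value)
    extinction = badness(1.0, future_value)
    first = near_extinction - peace        # peace vs. 99% dead
    second = extinction - near_extinction  # 99% dead vs. 100% dead
    return first, second


# Key value 1 only: count deaths, ignore the lost future entirely.
first, second = differences(future_value=0)
print(first > second)   # True: the First Difference dominates

# Key value 2 added: a grand future worth far more than the present population.
first, second = differences(future_value=1e12)
print(first > second)   # False: the Second Difference dominates
```

Whether the First or the Second Difference comes out greater thus turns entirely on how much value one places on the lost future, which is exactly the disagreement the study probes.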

Together with my colleagues Lucius Caviola and Nadira Faber at the University of Oxford, we wanted to test this psychological hypothesis of Parfit's, namely that people don't find extinction uniquely bad.

[Slide 8]

We did this using a slightly tweaked version of Parfit's thought experiment. We again asked people, on different online platforms, to consider three outcomes. But the first outcome wasn't peace, because we found that people had certain positive emotional associations with the word "peace" and we didn't want that to confound the results. Instead we just said that there is no catastrophe.

[Slide 9]

With regard to the second outcome, we made two changes: we replaced nuclear war with a generic catastrophe, because we weren't specifically interested in nuclear war, and we reduced the number of deaths from 99% to 80%, because we wanted people to believe it likely that we could recover from the catastrophe. The third outcome was that 100% of people die.

We first asked people to rank the three outcomes. Our hypothesis was that most people would rank them as Parfit thought one should: no catastrophe is the best, near extinction is second, and extinction is the worst.

[Slide 10]

This was indeed the case: 90% gave this ranking, and all other rankings got only 10% between them. Then we went on to another question, which we gave only to the participants who had given the predicted ranking; the other 10% were out of the study from this point on.

[Slide 11]

We asked: "In terms of badness, which difference is greater: the First Difference, between no catastrophe and near extinction, or the Second Difference, between near extinction and extinction?" As you'll recall, Parfit's hypothesis was that most people would find the First Difference greater; finding the Second Difference greater would mean that extinction is uniquely bad.

[Slide 12]

And we found that Parfit was indeed right. A clear majority found the First Difference greater, and only a minority found extinction uniquely bad.

We then wanted to know why people don't find extinction uniquely bad. Is it because they focus very strongly on the first key value, the badness of individual deaths and individual suffering? Or is it because they focus only weakly on the other key value, the badness of extinction and the lost future it entails?

We included a series of manipulations in our study to test these hypotheses. Some of them decreased the badness of individual suffering, and others emphasized or increased the badness of a lost future, so they latched on to either the first or the second of these hypotheses. This meant that the condition whose results I've just shown you acted as a control condition, alongside a number of experimental conditions or manipulations.

[Slide 13]

In total, we had more than twelve hundred participants in the British sample, making it a fairly large study. We also ran an identical study on a US sample, which yielded similar results, but here I will focus on the larger British sample.

[Slide 14]

Our first manipulation involved zebras. Here we had exactly the same three outcomes as in the control condition, except that we replaced humans with zebras. Our reasoning was that people likely empathize less with individual zebras; they don't feel as strongly about an individual zebra that dies as they do about an individual human that dies. There would therefore be less focus on individual suffering, the first key value, whereas people might still care pretty strongly about the zebra species, we thought, so extinction would still be bad.

[Slide 15]

Overall, this would mean that more people would find extinction uniquely bad when it comes to zebras. That was our hypothesis, and it was borne out: a significantly larger proportion of people found extinction uniquely bad when it comes to zebras, 44% versus 23%.
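As a rough illustration of what "significantly larger" means for figures like these, one could run a pooled two-proportion z-test. The per-condition sample sizes below are assumptions made for the sake of the example (the talk only says the British sample exceeded twelve hundred participants across all conditions), not the study's actual cell sizes or test.

```python
from math import erf, sqrt


def two_proportion_z_test(p1, n1, p2, n2):
    """Pooled two-proportion z-test for H0: the two proportions are equal.
    Returns the z statistic and the two-sided p-value (normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two normal tails
    return z, p_value


# 44% "uniquely bad" in the zebra condition vs. 23% in the control condition,
# with an assumed (hypothetical) 200 participants per condition.
z, p = two_proportion_z_test(0.44, 200, 0.23, 200)
```

With these assumed cell sizes the difference comes out highly significant (z of roughly 4.4, p well below 0.001), consistent with the talk's description of a significant effect.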

[Slide 16]

In our second manipulation we went back to humans, but now the humans were no longer killed; instead, they couldn't have any children. And of course, if no one can have children, then humanity will eventually go extinct.

[Slide 17]

Here, again, we thought that people would feel less strongly about sterilization than about death, so there would be less focus on the first key value, individual suffering, whereas extinction, and the lost future it entails, is as bad as in the control condition.

[Slide 18]

Overall, this should make more people find extinction uniquely bad when it comes to sterilization. This hypothesis was also borne out: 47% said extinction was uniquely bad in this condition, again a significant difference compared to the control condition.

[Slide 19]

Our third manipulation was somewhat different. Here we had, again, the three outcomes from the control condition, but after them we added the following text: "Please remember to consider the long-term consequences each scenario will have for humanity. If humanity does not go extinct, it could go on to a long future. This is true even if many, but not all, humans die in a catastrophe, since that leaves open the possibility of recovery. However, if humanity goes extinct, there would be no future for humanity."

[Slide 20]

This manipulation makes it clear that extinction means no future, while non-extinction may mean a long future. It emphasizes the badness of extinction and of losing the future, and so acts on that key value, whereas the other key value, the badness of individual suffering, isn't really affected.

[Slide 22]

Overall, it should again make more people find extinction uniquely bad. And here we found a similar effect as before: 50% now found extinction uniquely bad. Note that in this salience manipulation we didn't really add any new information; we just highlighted certain inferences which one could, in principle, have made even in the control condition.

[Slide 23]

But we also wanted to include one condition where we actually added new information. We called this the good future manipulation. Here, in the first outcome, we said not only that there is no catastrophe, but also that humanity goes on to live for a very long time in a future which is better than today in every conceivable way: there are no longer any wars, any crimes, or any people experiencing depression or sadness, and so on.

[Slide 24]

So, really a utopia. The second outcome was very similar: here, of course, there was a catastrophe, but we recover from it and then go on to the same utopia. The third outcome was the same as before, but we also really emphasized that extinction means that no humans will ever live again, and that all of human knowledge and culture will be lost forever.

[Slide 25]

This was a very strong manipulation; we really hammered home the extreme difference between these three outcomes, which should be kept in mind when we look at the results, because here we found quite a striking difference.

This manipulation says that the future will be very good if humanity survives, and that we would recover from a non-extinction catastrophe. It thus makes losing the future worse, affecting that key value, while the other key value, the badness of individual suffering, is not affected.

[Slide 26]

Overall, this should make more people find extinction uniquely bad, and that is really what we found: 77% found extinction uniquely bad, given that we would lose this very great future indeed.

So let's sum up: what have we learned from these four experimental conditions about why people don't find extinction uniquely bad in the control condition? One hypothesis was that people focus strongly on the badness of people dying in the catastrophes. We found this to be true, because when we reduced the badness of individual suffering, as in the zebra and sterilization manipulations, more people found extinction uniquely bad.

Our second hypothesis was that people don't feel that strongly about the other key value, the lost future, and we found some support for that hypothesis as well. One reason people don't feel as strongly about that key value is that they don't consider the long-term consequences that much. We know this because when we highlighted the long-term consequences, as in the salience manipulation, more people found extinction uniquely bad.

[Slide 27]

Another reason people focus only weakly on the lost future is that they hold certain empirical beliefs which reduce the value of the future. They may believe that the future will not be that good if humanity survives, and that we won't recover if 80% die. We know this because when we said that the future will be good if humanity survives, and that we will recover if 80% die, as we did in the good future manipulation, more people found extinction uniquely bad.

[Slide 28]

More briefly, I should also present another study that we ran, involving an "x-risk reducer" sample: a sample of people focused on reducing existential risk, which we recruited via the EA Newsletter and social media. Some of you may actually have taken this survey; if so, I should thank you for helping our research.

Here we had only two conditions: the control condition and the good future condition. We hypothesized that nearly all participants would find the Second Difference greater, both in the control condition and in the good future condition; that is, nearly all participants would find extinction uniquely bad.

[Slide 29]

And this was indeed what we found. That's quite a striking difference compared to laypeople, where we found a big difference between the good future condition and the control condition. The x-risk reducers find extinction uniquely bad even in the absence of information about how good the future is going to be.

So that sums up what I had to say about this specific study. Let me now zoom out a little bit and say some words about the psychology of existential risk and the long-term future in general. We think that this is just one example of a study that one could run, and that there could be many valuable studies in this area.

[Slide 30]

One reason we think that is a general fact about human psychology: we think quite differently about different domains. One example of this, which should be close to mind for many effective altruists, is that people in general think very differently about charitable donations and individual consumption; most people think much more about what they get for their money when it comes to individual consumption than when it comes to charity.

Similarly, it may be that we think quite differently when we think about the long-term future compared to when we think about shorter time frames. It may be, for instance, that when we think about the long-term future we have a more collaborative mindset, because we realize that, in the long term, we're all in the same boat.

I don't know whether that's the case; I'm speculating a bit here. But I think our prior should be quite high that we do have a distinctive way of thinking about the long-term future, and it's important to learn whether that's the case and how we do think about it.

Because, ultimately, we want to use that knowledge. We don't just want to gather it for its own sake. We want to use it in the project that many of you, thankfully, are a part of: to create a good long-term future.

Questions

Question: What about those 10% of people who kind of missed the boat, and got the whole thing flipped? Was there any follow-up on those folks? And what are they thinking?

Stefan: I think Scott Alexander at some point had a blog post about how you always get this sort of thing on online platforms. You always get some responses which are difficult to understand, so you very rarely get 100% agreement on anything. I wouldn't read too much into those 10%.

Question: Were these studies done on Mechanical Turk?

Stefan: We run many studies on Mechanical Turk, but the main study I presented here was run on a competitor to Mechanical Turk called Prolific. There we recruited British participants; on Mechanical Turk we typically recruit American participants. As I mentioned, we also ran a study on American participants, which essentially replicated the study I presented here. But what I presented concerned British participants on Prolific.

Question: Were there any differences in zebra affinity between Americans and Britons?

Question: Did you consider a suffering-focused manipulation, to increase the salience of the First Difference?

Stefan: That's an interesting question. No, we have not considered such a manipulation.

I guess in the sterilization manipulation there is substantially less suffering involved: what we say is that 80% can't have children and the remaining 20% can, so there seems to be much less suffering going on in that scenario compared with the other scenarios. I haven't thought through all the implications of that, but it is certainly something to consider.

Question: Where do we go from here with this research?

Stefan: One thing I found interesting was, as I said, that the good future manipulation is very strong, so it's not obvious that quite as many people would find extinction uniquely bad if we made it a bit weaker.

That said, we have some converging evidence for those conclusions: in another pre-study we asked people more directly, "How good do you think the future will be if humanity survives?" And we found that they thought the future is going to be slightly worse than the present. That seems somewhat unlikely given the first graph I showed, on the basis of which the world has arguably become better. Some people don't agree with that, but that would be my view.

In general, one thing that stood out to me was that people are probably fairly pessimistic about the long-term future, and that may be one key reason why they don't consider human extinction so important.

In a sense, I find that good news, because pessimism is something you can inform people about. It might not be super easy, but it seems somewhat tractable. Whereas if people had some sort of deeply held moral conviction that extinction isn't that important, it might have been harder to change their minds.

Question: Do you know of other ways to shift or nudge people in the direction of intuitively, naturally taking into account the possibility that the future represents?

Stefan: There was actually someone else who ran more informal studies in which they mapped out the argument for the long-term future. It was broadly similar to the salience manipulation, but with much more information, and ethical arguments as well.

As I recall, that seemed to have a fairly strong effect, so it falls broadly in the same ballpark: basically, you can inform people more comprehensively.