Fireside Chat with Toby Ord (2018)

Toby Ord is working on a book about existential risks for a general audience. This fireside chat with Will MacAskill, from EA Global 2018: London, illuminates much of Toby’s recent thinking. Topics include: What are the odds of an existential catastrophe this century? Which risks do we have the most reason to worry about? And why should we consider printing out Wikipedia?

Below is a transcript of Toby’s fireside chat, which CEA has lightly edited for clarity. You can also read the transcript on effectivealtruism.org, or watch the talk on YouTube.

The Talk

Will: Toby, you’re working on a book at the moment. Just to start off, tell us about that.

Toby: I’ve been working on a book for a couple of years now, and ultimately I think that big books—this one is on existential risk—are often a little bit like an iceberg, and certainly Doing Good Better was, where there’s this huge amount of work that goes on before you even decide to write the book, coming up with ideas and distilling them.

I’m trying to write really the definitive book on existential risk. I think the best book so far, if you’re looking for something before my book comes out, is John Leslie’s The End of the World. That’s from 1996. That book actually inspired Nick Bostrom, to some degree, to get into this.

I thought about writing an academic book. Certainly a lot of the ideas that are going to be included are cutting edge ideas that haven’t really been talked about anywhere before. But I ultimately thought that it was better to write something at the really serious end of general non-fiction, to try to reach a wider audience. That’s been an interesting aspect of writing it.

Will: And how do you define an existential risk? What counts as an existential risk?

Toby: Yeah. This is actually something that people often get wrong, even within effective altruism, because the name existential risk, which Nick Bostrom coined, is designed to be evocative of extinction. But the purpose of the idea, really, is that there’s the risk of human extinction, but there’s also a whole lot of other risks which are very similar in how we have to treat them. They all involve a certain common methodology for dealing with them, in that they’re risks that are so serious that we can’t afford to have even one of them happen. We can’t learn from trial and error, so we have to have a proactive approach.

The way that I currently think about it is that existential risks are risks that threaten the destruction of humanity’s long-term potential. Extinction would obviously destroy all of our potential over the long term, as would a permanent unrecoverable collapse of civilization, if we were reduced to a pre-agricultural state again or something like that, and as would various other things that are neither extinction nor collapse. There could be some form of permanent totalitarianism. If the Nazis had succeeded in a thousand-year Reich, and then maybe it went on for a million years, we might still say that that was an utter, perhaps irrevocable, disaster.

I’m not sure that at the time it would have been possible for the Nazis to achieve that outcome with existing technology, but as we get more advanced surveillance technology and genetic engineering and other things, it might be possible to have lasting terrible political states. So existential risk includes both extinction and these other related areas.

Will: In terms of what your aims are with the book, what’s the change you’re trying to effect?

Toby: One key aim is to introduce the idea of existential risk to a wider audience. I think that this is actually one of the most important ideas of our time. It really deserves a proper airing, trying to really get all of the framing right. And then also, as I said, to introduce a whole lot of new cutting edge ideas that are to do with new concepts, mathematics of existential risk and other related ideas, lots of the best science, all put into one place. There’s that aspect as well, so it’s definitely a book for everyone on existential risk. I’ve learned a lot while writing it, actually.

But also, when it comes to effective altruism, I think that often we have some misconceptions around existential risk, and we also have some bad framings of it. It’s often framed as if it’s this really counterintuitive idea. There are different ways of doing this. A classic one involves saying “There could be 10 to the power of 53 people who live in the future, so even if there’s only a very small chance...” and going from there, which makes it seem unnecessarily nerdy, where you’ve kind of got to be a math person to really get any pull from that argument. And even if you are a mathsy person, it feels a little bit like a trick of some sort, like some convincing argument that one equals two or something, where you can’t quite see what the problem is, but you’re not compelled by it.
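As a rough illustration of the expected-value framing being described here (a sketch, not an argument from the talk): the 10^53 figure is the one quoted, and the probability below is purely a placeholder.

```python
# The "astronomical stakes" framing being described (not endorsed in the talk
# as the best presentation): even a tiny reduction in extinction risk
# multiplies out to an enormous expected number of future lives.
future_people = 1e53      # the figure quoted in the classic argument
risk_reduction = 1e-10    # placeholder for "only a very small chance"
print(future_people * risk_reduction)  # roughly 1e43 expected future lives
```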

Actually, though, I think that there’s room for a really broad group of people to get behind the idea of existential risk. There’s no reason that my parents or grandparents couldn’t be deeply worried about the permanent destruction of humanity’s long-term potential. These things are really bad, and I actually think that it’s not a counterintuitive idea at all. In fact, ultimately I think that the roots of worrying about existential risk came from the risk of nuclear war in the 20th century.

My parents were out on marches against nuclear weapons. At the time, the biggest protest in US history was 2 million people in Central Park protesting nuclear weapons. It was a huge thing. It was actually the biggest thing at that time, in terms of civic engagement. And so when people can see that there’s a real and present threat that could threaten the whole future, they really get behind it. That’s also one of the aspects of climate change: people perceive it as a threat to continued human existence, among other things, and that’s one of the reasons that motivates them.

So I think that you can have a much more intuitive framing of this. The future is so much longer than the present, so some of the ways that we could help really could be by helping this long-term future, if there are ways that we could help that whole time period.

Will: Looking to the next century, let’s say, where do you see the main existential risks being? What are all the ones that we are facing, and which are the ones we should be most concerned about?

Toby: I think that there is some existential risk remaining from nuclear war and from climate change. I think that both of those are current anthropogenic existential risks. The nuclear war risk is via nuclear winter, where the soot from burning cities would rise up into the upper atmosphere, above the cloud level, so that it can’t get rained out, and then would block sunlight for about eight years or so. The risk there isn’t that it gets really dark and you can’t see or something like that, and it’s not that it’s so cold that we can’t survive, it’s that there are more frosts, and that the temperatures are depressed by quite a lot, such that the growing season for crops is only a couple of months. And there’s not enough time for the wheat to germinate and so forth, and so there’ll be widespread famine. That’s the threat there.

And then there’s climate change. Climate change is a warming. Nuclear winter is actually also a change in the climate, but a cooling. I think that the amount of warming that could happen from climate change is really underappreciated. The tail risk, the chance that the warming is a lot worse than we expect, is really big. Even if you set aside the serious risks of runaway climate change, of big feedbacks from the methane clathrates or the permafrost, even if you set all of those things aside, the scientists’ estimate for the warming if you doubled the CO2 in the atmosphere is three degrees. That’s the central estimate for what would happen if you doubled it.

But if you look at the fine print, they say it’s actually from 1.5 degrees to 4.5 degrees. That’s a huge range. There’s a factor of three between those estimates, and that’s just a 66% confidence interval. They actually think there’s a one in six chance it’s more than 4.5 degrees. So I think there’s a very serious chance that if it doubled, it’s more than 4.5 degrees, but also there’s uncertainty about how many doublings will happen. It could easily be the case that humanity doubles the CO2 levels twice, in which case, if we also got unlucky on the sensitivity, there could be nine degrees of warming.
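To make the arithmetic in that last step explicit, here is a minimal sketch using the figures cited above (a sensitivity of roughly 1.5 to 4.5 degrees per doubling of CO2, and up to two doublings); the code itself is illustrative, not from the talk:

```python
# Equilibrium warming is roughly climate sensitivity (degrees C per CO2 doubling)
# multiplied by the number of doublings of atmospheric CO2.
def warming(sensitivity_per_doubling: float, doublings: float) -> float:
    return sensitivity_per_doubling * doublings

# Central estimate: ~3 C per doubling; the quoted 66% interval is 1.5 to 4.5 C,
# so there is roughly a one in six chance the sensitivity exceeds 4.5 C.
print(warming(3.0, 1))   # 3.0  -> the headline "three degrees" figure
print(warming(4.5, 2))   # 9.0  -> unlucky sensitivity plus two doublings
```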

And so when you hear these things about how many degrees of warming they’re talking about, they’re often talking about the median of an estimate. If they’re saying we want to keep it below two degrees, what they mean is they want to keep the median below two degrees, such that there’s still a serious chance that it’s much higher than that. If you look into all of that, there could be very serious warming, much more serious than you get in a lot of scientific reports. But if you read the fine print in the analyses, this is in there. And so I think there’s a lack of really looking into that, so I’m actually a lot more worried about it than I was before I started looking into this.

By the same token, though, it’s difficult for it to be an existential risk. Even if there were 10 degrees of warming or something beyond what you’re reading about in the newspapers, the warming… it would be extremely bad, just to clarify. But I’ve been thinking about all these things in terms of whether they could be existential risks, rather than whether they could lead to terrible situations, which could then lead to other bad outcomes. But one thing is that in both cases, both nuclear winter and climate change, coastal areas are a lot less affected. There’s obviously flooding when it comes to climate change, but a country like New Zealand, which is mostly coastal, would be mostly spared the effects of either of these types of calamities. Civilization, as far as I can tell, should continue in New Zealand roughly as it does today, but perhaps without low priced chips coming in from China.

Will: I really think we should buy some land in New Zealand.

Toby: Like as a hedge?

Will: I’m completely serious about this idea.

Toby: I mean, we definitely should not screw up with climate change. It’s a really serious problem. It’s just that the question I’m looking at is: is it an existential risk? Ultimately, it’s probably better thought of as a change in the usable areas on the earth. They currently don’t include Antarctica. They don’t include various parts of Siberia and some parts of Canada, which are covered in permafrost. Effectively, with extreme climate change, the usable parts of the earth would move a bit, and they would also shrink a lot. It would be a catastrophe, but I don’t see why that would be the end.

Will: Between climate change and nuclear winter, do you think climate change is too neglected by EA?

Toby: Yeah, actually, I think it probably is. Although you don’t see many people in EA looking at either of those, I think they’re actually very reasonable areas. In both cases, it’s unclear why they would be the end of humanity, and people in nuclear winter research generally do not say that it would be. They say it would be catastrophic, and maybe 90% of people could die, but they don’t say that it would kill everyone. I think in both cases, they’re such large changes to the earth’s environment, huge unprecedented changes, that you can’t rule out that something that we haven’t yet modeled happens.

I mean, we didn’t even know about nuclear winter until more than 30 years after the use of nuclear weapons. There was a whole period of time when such effects could have come into play, and we would have been completely ignorant of them if we had launched a war. So there could be other things like that. And in both cases, that’s where I think most of the danger of existential risk lies, just that it’s such a large perturbation of the earth’s system that one wouldn’t be shocked if it turned out to be an existential catastrophe. So there are those ones, but I think the things that are of greatest risk are things that are forthcoming.

Will: So, tell us about the risks from unprecedented technology.

Toby: Yeah. The two areas that I’m most worried about in particular are biotechnology and artificial intelligence. When it comes to biotech, there’s a lot to be worried about. If you look at some of the greatest disasters in human history, in terms of the proportion of the population who died in them, great plagues and pandemics are in this category. The Black Death killed between a quarter and 60% of people in Europe, and it was somewhere between 5 and 15% of the entire world’s population. And there are a couple of other cases that are perhaps at a similar level, such as the spread of Afro-Eurasian germs into the Americas when Columbus went across and they exchanged germs. And also, say, the 1918 flu killed about 4% of the people in the world.

So we’ve had some cases that were big, really big. Could they be so big that everyone dies? I don’t think so, at least from natural causes. But maybe. It wouldn’t be silly to be worried about that, but it’s not my main area of concern. I’m more concerned with the biotechnological advances that we’ve had. We’ve had radical breakthroughs recently. It’s only recently that we’ve even discovered that there are bacteria and viruses, that we’ve worked out about DNA, and that we’ve worked out how to take parts of DNA from one organism and put them into another. How to synthesize entire viruses just based on their DNA code. Things like this. And these radical advances in technology have let us do some very scary things.

And there’s also been this extreme democratization, as it’s often called, of this technology; but since the technology could be used for harm, it’s also a form of proliferation, and so I’m worried about that. It’s happening very quickly. You probably all remember when the Human Genome Project was first announced. That cost billions of dollars, and now a complete human genome can be sequenced for $1,000. It’s kind of a routine part of PhD work, that you get a genome sequenced.

These things have come so quickly. Other things like CRISPR and gene drives were really radical technologies: CRISPR for putting arbitrary genetic code from one animal into another, and gene drives for releasing it into the wild and having it proliferate. There were less than two years between these being invented by the cutting edge labs in the world, the very smartest scientists, Nobel Prize-worthy stuff, and being replicated by undergraduates in science competitions. Just two years. And so if you think about that, the pool of people who could have bad motives, who have access to the ability to do these things, is increasing massively, from just a select group of people, where you might think there’s only five people in the world who could do it, who have the skills, who have the money, and who have the time to do it, through to something much faster, where the pool of people is in the millions. There’s just much more chance you get someone with bad motivation.

And there are also states with bioweapons programs. We often think that we’re protected by things like the Bioweapons Convention, the BWC. That is the main protection, but there are states who violate it. We know, for example, that Russia has been violating it for a long time. They had massive programs with more than 10,000 scientists working on versions of smallpox, and they had an outbreak when they did a smallpox weapons test, which has been confirmed, and they also killed a whole lot of people with anthrax accidentally when they forgot to replace a filter on their lab and blew a whole lot of anthrax spores out over the city that the lab was based in.

There are really bad examples of biosafety there, and also the scary thing is that people are actually working on these things. The US believes that there are about six countries in violation of this treaty. Some countries, like Israel, haven’t even signed up to it. And the convention itself has the budget of a typical McDonald’s, and it has four employees. So that’s the thing that stands between us and misuse of these technologies, and I really think that that is grossly inadequate.

Will: The Bioweapons Convention has four people working in it?

Toby: Yeah. It had three. I had to change it in my book, because a new person got employed.

Will: How does that compare to other sorts of conventions?

Toby: I don’t know. It’s a good question. So those are the types of reasons that I’m really worried about developments in bio.

Will: Yeah. And what would you say to the response that it’s just very hard for a virus to kill literally everybody, because they have this huge bunker system in Switzerland, nuclear submarines have six-month tours, and so on? Obviously, this would be an unimaginable tragedy for civilization, but still there would be enough people alive that over some period of time, populations would increase again.

Toby: Yeah. I mean, you could add to that uncontacted tribes and also researchers in Antarctica as other hard-to-reach populations. I think it’s really good that we’ve diversified somewhat like that. I think that it would be really hard, and so I think that even if there is a catastrophe, it’s likely to not be an existential disaster.

But there are reasons for some actors to try to push something to be extremely dangerous. For example, as I said, the Soviets, then Russians after the collapse of the Soviet Union, were working on weaponizing smallpox, and weaponizing Ebola. It was crazy stuff, and tens of thousands of people were working on it. And they were involved in a mutually assured destruction nuclear weapons system with a dead hand policy, where even if their command centers were destroyed, they would force retaliation with all of their weapons. There was this logic of mutually assured destruction and deterrence, where they needed to have ways of plausibly inflicting extreme amounts of harm in order to try to deter the US. So they were already involved in that type of logic, and so it would have made some sense for them to do terrible things with bioweapons too, assuming the underlying logic makes any sense at all. So I think that there could be realistic attempts to make extremely dangerous bioweapons.

I should also say that I think this is an area that’s under-invested in, in EA. I would say that the existential risk from bio is maybe about half that of AI, or a quarter or something like that: only a factor of two or four smaller. If you recall, in effective altruism we’re not interested in working on the problem that has the biggest size, we’re interested in what marginal impact you’ll have. And it’s entirely possible that someone would be more than a couple of times better at working on trying to avoid bio problems than they would be on trying to avoid AI problems.

And also, the community among EAs who are working on biosecurity is much smaller as well, so one would expect there to be good opportunities there. But work on bio-risk does require quite a different skillset, because in bio, a lot of the risk is misuse risk, either by lone individuals, small groups, or nation states. It’s much more of a traditional security-type area, where working in biosecurity might involve talking a lot with national security programs and so forth. It’s not the kind of thing where one wants free and open discussions of all of the different possibilities. And one also doesn’t want to just say, “Hey, let’s have this open research forum where we’re just on the internet throwing out ideas, like, ‘How would you kill every last person? Oh, I know! What about this?’” We don’t actually want that kind of discussion about it, which puts it in a bit of a different zone.

But for people who are actually able to not talk about things that they find interesting and fascinating and important, which a lot of us have trouble with, and who perhaps already have a bio background, it could be a very useful area.

Will: Okay. So even though EA in general is taking these risks more seriously than maybe most people, you think we’re still neglecting bio relative to the EA portfolio.

Toby: I think so. And then AI, I think, is probably the biggest risk.

Will: Okay, so tell us a little bit about that.

Toby: Yeah. You may have heard more than you ever want to about AI risk. But basically, my thinking about this is that the reason that humanity is in control of its destiny, and the reason that we have such a large long-term potential, is because we are the species that’s in control. For example, gorillas are not in control of their destiny. Whether or not they flourish (I hope that they will) depends upon human choices. We’re not in such a position compared to any other species, and that’s because of our intellectual abilities, both what we think of as intelligence, like problem-solving, and also our ability to communicate and cooperate.

But these intellectual abilities have given us the position where we have the majority of the power on the planet, and where we have the control of our destiny. If we create some artificial intelligence, generally intelligent systems, and we make them be smarter than humans and also just generally capable and have initiative and motivation and agency, then by default, we should expect that they would be in control of our future, not us. Unless we made good efforts to stop that. But the relevant professional community, who are trying to work out how to stop it, how to guarantee that such systems obey commands or that they’re just motivated to help humans in the first place, think it’s really hard, and they have higher estimates of the risk from AI than anyone else.

There’s disagreement about the level of risk, but there are also some of the most prominent AI researchers, including ones who are attempting to build such generally intelligent systems, who are very scared about it. They aren’t the whole AI community, but they are a significant part of it. There are a couple of other AI experts who say that worrying about existential risk is a really fringe position in AI, but they’re actually either just lying or they’re incompetently ignorant, because they should notice that Stuart Russell and Demis Hassabis are very prominently on the record saying this is a really big issue.

So I think that should just give us a whole lot of reason to expect that, yeah, creating a successor species could well be the last thing we do. And maybe we’d create something that also is even more important than us, and it would be a great future to create a successor. It would be effectively our children, or our “mind children,” maybe. But also, we don’t have a very good idea how to do that. We have even less of an idea about how to create artificial intelligence systems that themselves have moral status and have feelings and emotions, and strive to achieve greater perfections than us and so on. More likely it would be built for some more trivial ultimate purpose. Those are the kinds of reasons why I’m worried.

Will: Yeah, you hinted at this briefly, but over the next hundred years, let’s say, what overall chance would you assign to some existential risk event, and then how does that break down between these different risks you’ve suggested?

Toby: Yeah. I would say something like a one in six chance that we don’t make it through this century. I think that there was something like a one in a hundred chance that we didn’t make it through the 20th century. Overall, we’ve seen this dramatic trend towards humanity having more and more power, often increasing at exponential rates, depending on how you measure it. But there hasn’t been a similar increase in human wisdom, and so our power has been outstripping our wisdom. The 20th century is the first one where we really had the potential to destroy ourselves. I don’t see any particular reason why we wouldn’t expect the 21st century, then, to have our power outbalance our wisdom even more, and indeed that seems to be the case. We also know of particular technologies through which this could happen.

And then the 22nd century, I think, would be even more dangerous. I don’t really see a natural end to this until we discover almost all the technologies that can be built or something, or we go extinct, or we get our act together and decide that we’ve had enough of that and we’re going to make sure that we never suffer any of these catastrophes. I think that that’s what we should be attempting to do. If we had a business-as-usual century, I don’t know what I’d put the risk at for this century. A lot higher than one in six. My one in six is because I think that there’s a good chance, particularly later in the century, that we get our act together. If I knew we wouldn’t get our act together, it’d be more like one in two, or one in three.

Will: Okay, cool. So if no one really cared, no one was really taking action, it would be more like 50/50?

Toby: Yeah, if it was pretty much like it is at the moment, with us just running forward, then yeah. I’m not sure. I haven’t really tried to estimate that, but it would be something like a third or a half.

Will: Okay. And then within that one in six, how does that break down between these different risks?

Toby: Yeah. Again, these numbers are all very rough, I should clarify to everyone, but I think it’s useful to try to give quantitative estimates when you’re giving rough numbers, because if you just say, “I think it’s tiny,” and the other person says, “No, I think it’s really important,” you may actually both think it’s the same number, like 1% or something like that. I think that I would say AI risk is something like 10%, and bio is something like 5%.

Will: And then the others are less than a percent?

Toby: Yeah, that’s right. I think that climate change and… I mean, climate change wouldn’t kill us this century if it kills us, anyway. And nuclear war, definitely less than a percent. And probably the remainder would be more in the unknown risks category. Maybe I should actually have even more of the percentage in that unknown category.
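A rough tally of the numbers given here, just to make the implied arithmetic visible; the values for climate change and nuclear war are illustrative stand-ins for “less than a percent,” and the remainder is simply what is left over:

```python
# Toby's rough estimates of existential risk this century, as probabilities.
total_risk = 1 / 6                # ~0.167 overall
named_risks = {
    "AI": 0.10,
    "bio": 0.05,
    "climate change": 0.005,      # "less than a percent" (illustrative)
    "nuclear war": 0.005,         # "less than a percent" (illustrative)
}
unknown = total_risk - sum(named_risks.values())
print(round(unknown, 3))          # ~0.007, i.e. under 1% left for unknown risks
```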

Will: Let’s talk a little bit about that. How seriously do you take unknown existential risks? I guess they are known unknowns, because we know there are some.

Toby: Yeah.

Will: How seriously do you take them, and then what do you think we should do, if anything, to guard against them?

Toby: Yeah, it’s a good question. I think we should take them quite seriously. If we think backwards, and ask what risks we would have known about in the past, we had very little idea. Only two people had any idea about nuclear bombs in, let’s say, 1935 or something like that, a few years before the bomb first started to be designed. It would have been unknown technology for almost everyone. And if you go back five more years, then it was unknown to everyone. I think that with these issues about AI and, actually, man-made pandemics, there were a few people who were talking about these things very early on, but only a couple of people, and it might have been hard to distinguish them from the noise.

But I think ultimately, we should expect that there are unknown risks. There are things that we can do about them. One of the things that we could do about them is to work on things like stopping war. So I’m thinking of, say, avoiding great power war, as opposed to avoiding all particular wars. Some potential wars have no real chance of causing existential catastrophe. But things like World War II or the Cold War were cases where they plausibly could have.

I think the way to think about this is not that war itself, or great power war, is an existential risk, but rather it’s something else, which I call an existential risk factor. I take inspiration in this from the Global Burden of Disease, which looks at different diseases and shows how much mortality and morbidity, say, heart disease causes in the world, and adds up a number of disability-adjusted life years for that. They do that for all the different diseases, and then they also want to ask questions like how much ill health does smoking cause, or alcohol?

You can think of these things as pillars for each of the different particular diseases, but then there’s this question of cross-cutting things, where something like smoking increases heart disease and also lung cancer and various other things, so it kind of contributes a bit to a whole lot of different outcomes. And they ask the question, well, if you took smoking from its current level down to zero, how much ill health would go away? They call that the burden of the risk factor, and you can do that with a whole lot of things. Not many people think about this, though, within existential risk. I think our community tends to fixate on particular risks a bit too much, and they think if someone’s really interested in existential risk, that’s good. They’ll say, “Oh, you work on asteroid prediction and deflection? That’s really cool.” That person is part of the in-group, or the team, or something.

And if they hear that someone else works on global peace and cooperation, then they’ll think, “Oh, I guess that might be good in some way.” But actually, ask yourself how much existential risk there is this century, conditional on knowing there was going to be no great power war. How much would it go down from, say, my current estimate of about 17%? I don’t know. Maybe down to 10% or something like that, or it could halve. It could actually have a very big effect on the amount of risk.

And if you think about, say, World War II, that was a big great power war, and they invented nuclear weapons during that war, because of the war. And then we also started to massively escalate and invent new types of nuclear weapons, thermonuclear weapons, because of the Cold War. So war has a history of really provoking existential risk, and I think that this really connects in with the risks that we don’t yet know about, because one way to try to avoid those risks is to try to avoid war, because war has a tendency to drive us to delve into dark corners of technology space.

So I think that’s a really useful idea that people should think about. The risk of being wiped out by asteroids is on the order of one in a million per century. I think it’s actually probably lower. Whereas, as I just said, great power war, taking that down to zero instead of taking asteroid risk down to zero, is probably worth multiple percentage points of existential risk, which is way more. It’s like thousands of times bigger. While certain kinds of nebulous peace-type things might have a lot of people working on them, and so might not be that neglected, trying to avoid great power wars in particular, thinking about the US and China and Russia and maybe the EU, and trying to avoid any of these poles coming into war with each other, is actually quite a lot more neglected. So I think that there would be really good opportunities to try to help with these future risks that way. And that’s not the only one of these existential risk factors. You could think of a whole lot of things like this.
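A minimal sketch of that comparison, using the rough figures mentioned here (a 17% baseline, perhaps 10% conditional on no great power war, and asteroid risk of around one in a million per century); these are rough estimates, not precise values:

```python
# An "existential risk factor": how much total existential risk would fall
# if the factor were removed entirely.
baseline_risk = 0.17                    # rough estimate for this century
risk_without_great_power_war = 0.10     # rough conditional estimate
asteroid_risk = 1e-6                    # ~one in a million per century

war_factor = baseline_risk - risk_without_great_power_war   # ~0.07
print(war_factor / asteroid_risk)       # ~70,000: many thousands of times bigger
```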

Will: Do you have any views on how likely a great power war is over the next century, then?

Toby: I would not have a better estimate of that than anyone else in the audience.

Will: Reducing great power war is one way of reducing unknown risks. Another way might be things like refuges, or greater detection measures, or backing up knowledge in certain ways. Stuff like David Denkenberger’s work with ALLFED. What’s your view on these sorts of activities, which are about ensuring that small populations of people, after a global catastrophe that isn’t an extinction event, are then able to flourish again rather than just dwindle?

Toby: It sounds good. Definitely, the sign is positive. How good it is compared to other kinds of direct work one could do on existential risk, I’m not sure. I tend to think that, at least assuming we’ve got a breathable atmosphere and so on, it’s probably not that hard to come back from the collapse of civilization. I’ve been looking a lot when writing this book at the really long-term history of humanity and civilization. And one thing that I was surprised to learn is that the agricultural revolution, this ability to move from hunter-gatherer, forager-type life, into something that could enable civilization, cities, writing, and so forth, happened about five times in different parts of the world.

So sometimes people, I think mistakenly, refer to Mesopotamia as the cradle of civilization. That’s a very Western approach. Actually, there are many cradles, and there were civilizations that started in North America, South America, New Guinea, China, and Africa. So actually, I think every continent except for Australia and Europe. And ultimately, these civilizations have kind of merged together into some kind of global amalgam at the moment. And they all happened at a very similar time, like within a couple of thousand years of each other.

Basically, as soon as the most recent ice age ended and the rivers started flowing and so on, then around these very rivers, civilizations developed. So it does seem to me to be something that is not just a complete fluke or something like that. I think that there’s a good chance that things would bounce back, but work to try to help with that, particularly the very first bits of work, still seems worthwhile. As an example, printing out copies of Wikipedia, putting them in some kind of dried out, airtight containers, and just putting them in some places scattered around the world or something, is probably the kind of cheap thing that an individual could fund, and maybe a group of five people could actually just do. We’re still in the case where there are a whole lot of things you could do, just-in-case type things.

Will: I wonder how big Wikipedia is when you print it all out?

Toby: Yeah, it could be pretty big.

Will: You’d probably want to edit it somehow.

Toby: You might.

Will: Justin Bieber and stuff.

Toby: Yeah, don’t do the Pokemon section.

Will: What are the non-consequentialist arguments for caring about existential risk reduction? Something that’s distinctive about your book is you’re trying to unite various moral foundations.

Toby: Yeah, great. That’s something that’s very close to my heart. And this is part of the idea that I think that there’s a really common sense explanation as to why we should care about these things. It’s not salient to many people that there are these risks, and that’s a major reason that they don’t take them seriously, rather than because they’ve thought seriously about it, and they’ve decided that they don’t care whether everything that they’ve ever tried to create and stand for in civilization and culture is all destroyed. I don’t think that many people explicitly think that.

But my main approach, the guiding light for me, is really thinking about the opportunity cost: thinking about everything that we could achieve, and this great and glorious future that is open to us. And actually, the last chapter of my book really explores that and looks at the epic durations that we might be able to survive for, the types of things that happen over these cosmological time scales that we might be able to achieve. That’s one aspect, duration. I think it’s quite inspiring to me. And then there’s also scale: civilization could go beyond the Earth and into the stars. I think there’s quite a lot that would be very good there.

But also the quality of life could be improved a lot. People could live longer and healthier in various obvious ways, but also they could… If you think about your peak experiences, the moments that really shine through, the very best moments of your life, they’re so much better, I think, than typical experiences. Even within human biology, we are capable of having these experiences, which are much better, much more than twice as good as the typical experiences. Maybe we could get much of our life up to that level. So I think there’s a lot of room for improvement in quality as well.

These ideas about the future really are the main guide to me, but there are also these other foundations, which I think also point to similar things. One of them is a deontological one, where Edmund Burke, one of the founders of political conservatism, had this idea of the partnership of the generations. What he was talking about there was that we’ve had ultimately a hundred billion people who’ve lived before us, and they’ve built this world for us. And each generation has made improvements, innovations of various forms, technological and institutional, and they’ve handed down this world to their children. It’s through that that we have achieved greatness. Otherwise, we know what it would be like. It would be very much like it was on the savanna in South Africa for the first generations, because it’s not like we would have somehow been able to create iPhones from scratch or something like that.

Basically, if you look around, pretty much every single thing you can see, other than, I guess, the people in this room, was built up out of thousands of generations of people working together, passing down all of their achievements to their children. And it has to be. That’s the only way you can have civilization at all. And so, is our generation going to be the one that breaks this chain and that drops the baton and destroys everything that all of these others have built? It’s an interesting kind of backwards-looking idea there, of debts that we owe and a kind of relationship we’re in. One of the reasons that so much was passed down to us was an expectation of the continuation of this. That’s, to me, another quite moving way of thinking about this, which doesn’t appeal to thoughts about the opportunity cost that would be lost in the future.

And another one that I think is quite interesting is a virtue approach. When people talk about virtue ethics, they’re often thinking about character traits which are particularly admirable or valuable within individuals. I’ve been increasingly thinking, while writing this book, about this at a civilizational level: thinking of humanity as a group agent, the kind of collective things that we do, in the same way as we might think of, say, the United Kingdom as a collective agent and talk about what the UK wants when it comes to Brexit or some question like that.

If we think about humanity that way, then I think we’re incredibly imprudent. We take these risks, which would be insane risks if an individual were taking them: scaled to the lifespan of humanity, it’s equivalent to us taking risks with our whole future life just to make the next five seconds a lot better. With no real thought about this at all, no explicit questioning of it or even calculating it out or anything, we’re just blithely taking these risks. I think that we’re very impatient and imprudent. I think that we could do with a lot more wisdom, and I think that you can also come at this from that perspective. When you look at humanity’s current situation, it does not look like how a wise entity would be making decisions about its future. It looks incredibly juvenile and immature, like it needs to grow up. And so I think that’s another kind of moral foundation through which one could come to these same conclusions.

Will: What are your views on timelines for the development of advanced AI? How has that changed over the course of writing the book, if at all?

Toby: Yeah. I guess my feelings on timelines have changed over the last five or 10 years. Ultimately, the deep learning revolution has gone very quickly, and in terms of the remaining things that need to happen before you get artificial general intelligence, there really are not that many left. Progress seems very quick, and there don’t seem to be any fundamental reasons why the current wave of technology couldn’t take us all the way through to the end.

Now, it may not. I hope it doesn’t, actually. I think that would just be a bit too fast, and we’d have a lot of trouble handling it. But I can’t rule out it happening in, say, 10 years or even less. It seems unlikely. I guess my best guess for a kind of median estimate, so as much chance of happening before this date as happening after this date, would be something like 20 years from now. But also, if it took more than 100 years, I wouldn’t be that surprised. I allocate, say, a 10% chance or more to it taking longer than that. But I do think that there’s a pretty good chance that it happens within, say, 10 to 20 years from now. Maybe there’s like a 30 or 40% chance it happens in that interval.

That is quite worrying, because this is a case where I can’t rely on the idea that humanity will get its act together. I think ultimately the case with existential risk is fairly clear and compelling. This is something that is worth a significant amount of our attention and is one of the most important priorities for humanity. But we might not be able to make that case widely over short time periods, so it does worry me quite a bit.

Another aspect here, which gets a bit confusing, and is sometimes confused within effective altruism, is to try to think about the timelines that you think are the most plausible, so you can imagine a probability distribution over different years for when it would arrive. But then there’s also the aspect that your work would have more impact if it happened sooner, and I think this is a real thing, such that if AI is developed in 50 years’ time, then the ideas we have now about what it’s going to look like are more likely to be wrong. Trying to do work now that involves these current ideas will be more shortsighted about what’s actually going to help with the problem. And also, there’ll be many more people who’ve come to work on the problem by that point, so it’ll be much less neglected by the time it actually happens, whereas if it happens sooner, it’ll be much more neglected. Your marginal impact on the problem is bigger if it happens sooner.

You could start with your overall distribution about when it’s going to happen, and then modify that into a kind of impact-adjusted distribution about when it’s going to happen. That’s ultimately the kind of thing that would be most relevant when you think about it. Effectively, this is perhaps just an unnecessarily fancy way of saying, one wants to hedge against it coming early, even if you thought that was less likely. But then you also don’t want to get yourself all confused and then think it is coming early, because you somehow messed up this rather complex process of thinking about your leverage changing over time, as well as the probability changing over time. I think people often do get confused. They then decide they’re going to focus on it coming early, and then they forget that they were focusing on it because of leverage considerations, not probability considerations.
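A minimal sketch of the re-weighting described here, with made-up numbers: an illustrative probability distribution over arrival dates is multiplied by a leverage weight that is higher for earlier dates (since the problem would be more neglected and current ideas more applicable), then renormalized. Both the probabilities and the weights are placeholders, not figures from the talk:

```python
# Illustrative probability that advanced AI arrives in each period (sums to 1).
arrival_prob = {"0-10 yrs": 0.15, "10-20 yrs": 0.35, "20-50 yrs": 0.30, "50+ yrs": 0.20}

# Hypothetical leverage weights: earlier arrival means a more neglected problem
# and ideas that are less likely to be obsolete, so work now matters more.
leverage = {"0-10 yrs": 4.0, "10-20 yrs": 3.0, "20-50 yrs": 1.5, "50+ yrs": 1.0}

weighted = {k: arrival_prob[k] * leverage[k] for k in arrival_prob}
total = sum(weighted.values())
impact_adjusted = {k: v / total for k, v in weighted.items()}
print(impact_adjusted)  # shifts weight toward earlier arrival, i.e. hedging against it coming early
```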

Will: In response to the hedging, what would you say to the idea that, well, in very long timelines, we can have unusual influence? So supposing it’s coming in 100 years’ time, I’m like, “Wow, I have this 100 years to kind of grow. Perhaps I can invest my money, build hopefully exponentially growing movements like effective altruism and so on.” And this kind of patience, this ability to think on such a long time horizon, that’s itself a kind of unusual superpower or way of getting leverage.

Toby: That is a great question. I’ve thought about that a lot, and I’ve got a short piece on this online: The Timing of Labour Aimed at Existential Risk Reduction. And what I was thinking about was this question: suppose you’re going to do a year of work. Is it more important that a year of work happens now, or that a year of work happens closer to the crunch time, when the risks are imminent? And you could apply this to other things as well as existential risk. Ultimately, I think that there are some interesting reasons that push in both directions, as you’ve suggested.

The big one that pushes towards later work, such that you’d rather have the year of work be done in the immediate vicinity of the difficult time period, is something I call nearsightedness. We just don’t know what the shape of the threats is. I mean, as an example, it could be that now we think AI is bigger than bio, but then it turns out within five or 10 years’ time that there’ve been some radical breakthroughs in bio, and we think bio’s the biggest threat. And then we think, “Oh, I’d rather have been able to switch my labor into bio.”

So that’s an aspect where it’s better to be doing it later in time, other things being equal. But then there are also quite a few reasons why it’s good to do things earlier in time, and these include, as you were suggesting, growth. There are various things to do with this: your money in a bank or an investment could grow, such that you do the work now, you invest the money, the money’s much bigger, and then you pay for much more work later. Obviously, there’s growth in terms of people and ideas, so you do some work growing a movement, then you have thousands or millions of people trying to help later, instead of just a few. Also, growing an academic field works like that. A lot of things do.

And then there are also other related ideas, like steering. If you’re going to do some work on steering the direction of how we deal with one of these issues, you want to do that steering work earlier, not later. It’s like the idea of diverting a river. You want to do that closer to the source of the river. And so there are various of these things that push in different directions, and they help you to work out the different things you were thinking of doing. I like to think of this as a portfolio, in the same way as we think perhaps of an EA portfolio, what we’re all doing with our lives. It’s not the case that each one of us has to mirror the overall portfolio of important problems in the world, but what we should do together is contribute as best we can to humanity’s portfolio of work on these different issues.

Similarly, you could think of a portfolio over time, of all the different bits of work and which ones are best to be done at which different times. So now it’s better to be thinking deeply about some of these questions, trying to do some steering, trying to do some growth. And direct work is often more useful to be done later, although there are some exceptions. For example, it could be that with AI safety, you actually need to do some direct work just to prove that there’s a “there” there. And I think that that is effectively what direct work on AI safety is doing at the moment: the main benefit of it is actually that it helps with the growth of the field.

So anyway, there are a few different aspects to that question, but I think that our portfolio should involve both of these things. I think there’s also a pretty reasonable chance, indeed, that AI comes late or that the risks come late and so on, such that the best thing to be doing now is growing the interest in these areas. In some ways, my book is a bet on that, to say it’d be really useful if this idea had a really robust and good presentation, and to try to do that and present it in the right way, so that it has the potential to really take off and be something that people all over the world take seriously.

Obviously, that’s in some tension with the possibility AI could come in five years, or some other risk, bio risk, could happen really soon. Or nuclear war or something like that. But I think ultimately, our portfolio should go both places.

Will: Terrific. Well, we’ve got time for one last short question, the first question that we got. Will there be an audiobook?

Toby: Yes.

Will: Will you narrate it?

Toby: Maybe.
