Will MacAskill: Why should effective altruists embrace uncertainty?

This transcript of an EA Global talk, which CEA has lightly edited for clarity, is crossposted from effectivealtruism.org. You can also watch the talk on YouTube.

Probabilistic thinking is only a few centuries old, we have very little understanding of how most of our actions affect the long-term future, and prominent members of the effective altruism community have changed their minds on crucial considerations before. These are just three of the reasons that Will MacAskill urges effective altruists to embrace uncertainty, and not become too attached to present views. This talk was the closing talk for Effective Altruism Global 2018: San Francisco.

The Talk

Thanks so much for an awesome conference. I think this may be my favorite EAG ever, actually, and we have certain people to thank for that. So let's give a big round of applause for Katie, Amy, Julia, Barry, and Kerri, who did an awesome job. Thank you.

Now that I've been to the TED conference, I know that they have an army of 500 people running it, and we have five, which shows just how dedicated they are. But we also had an amazing team of volunteers, led by Tessa. So, a round of applause for their help as well. You all crushed it, so thank you.

Let's look at a few conference highlights. There was tons of good stuff at the conference, and I can't talk about it all, but there were many amazing talks. Sadly, every EAG, I end up going to about zero, but I heard they were really good. So, I hope you had a good time there. We had awesome VR.

The VR came from Animal Equality, and I talked about the importance of, or the idea of, really trying to get in touch with particular intuitions. So I hope many of you had a chance to experience that.

We also had loads of fun along the way. This photo makes us look like we had a kind of rave room going on. I want to draw particular attention to Igor's blank stare, but with a little smile. So you know, I want to know what he was having. And then, most importantly, we had great conversations.

So look at this photo. Look how nice Max and Becky look. Just like, you know, you want them to be your kids or something. It's kind of heartwarming.

My own personal highlight was getting to talk with Holden, and in particular him telling us about his love of stuffed animals. You might not know that from his Open Philanthropy posts, but he's going to write about it in the future.

I talked about having a different kind of gestalt, a different worldview. That feeling of gestalt shift was actually most present for me in some stuff Holden said. In particular, he emphasized the importance of self-care: this idea that he worked out the average number of hours he works in a week, and that's his fixed point, he can't work harder than that, really. And that there's no reason to feel bad about it. And yeah, in my own case, I was like, "Well, obviously I kind of know that on an abstract level, or something." But hearing it from someone I admire as much as Holden, and who I know is as productive as he is, really helped turn that into something that I feel… now I think I am able to feel it on more of a gut level.

So, the theme of the conference was Stay Curious. And I talked earlier on about the contrast between Athens and Sparta. I think we definitely got a good demonstration that you are excellent Athenians, excellent philosophers. In particular, I told the story about philosophers at an old conference not being able to make it to the bar after the conference. Well, last night, attempting to go to the speakers' reception, there were two groups of us: one goes into an elevator before us, me and my group go in, go down, and the others just aren't there. Scott Garrabrant tells me they went from the fourth floor down to the first, the doors opened, the doors closed again, and they went right back up to the fourth. So, I don't want to say I told you so, but yeah, we're definitely doing well on the philosopher side of things.

So, we talked about being curious over the course of this conference. Now I'm going to talk a bit about taking that attitude and continuing it over the following year. And I'm going to quickly give three arguments, or ways of thinking, just to emphasize how little we know, and how important it therefore is to keep an open mind.

The first argument is just how recent many intellectual innovations were. The idea of probability theory is only a few centuries old. So for most of human civilization, we just didn't really have the concept of thinking probabilistically. If we'd made an argument like, "Oh, we're really concerned about the risk of human extinction; not that we think it's definitely going to happen, but there's some chance, and it'd be really bad," people would just have said, "I don't get it."

I can't even really imagine what it'd be like to just not have the concept of probability, and yet for thousands of years people were operating without it. Similarly with utilitarianism. I mean, this kind of goes back a little bit to the Mohists in early China, but at least in its modern form, it was only developed in the 18th century. And while effective altruism is definitely not utilitarianism, it's clearly part of a similar intellectual current. And given that this moral view, which I think has one of the best shots of being the correct moral view, was only developed a few centuries ago, well, who knows what the coming centuries hold?

More recently as well, there's the idea of evidence-based medicine. The term "evidence-based medicine" only arose in the 1990s; the practice only really started in the late 1960s. There was almost no attempt to apply the experimental method more than 80 years ago. And again, this is just such an obvious part of our worldview. It's amazing that this didn't exist before that point. The whole field of population ethics, again, what we think of as among the most fundamental crucial considerations, only really came to be discussed with Parfit's Reasons and Persons, published in 1984. The use of randomized controlled trials in development economics, at least outside the area of health care, again, dates only to the 1990s, still very recent in societal terms.

And then there's the whole idea of AI safety, or the importance of ensuring that artificial intelligence doesn't have very bad consequences, again, really from the early 2000s. So this trend should really make us appreciate that there are so many developments that should cause radical worldview changes. I think it should definitely raise the question of "Well, what are the further developments over the coming decades that might really switch our views again?"

The second argument is, more narrowly, the really big updates that people in the EA community have made in the past. So again, in my conversation with Holden, he talked about how for very many years he did not take seriously the loopy ideas of effective altruism. But, as he's written about publicly, he's really massively changed his view on things like considerations of the long-term future, and the moral status of nonhuman animals as well. And again, these are huge, worldview-changing things.

In my own case as well, certainly when I started out with effective altruism, I really thought there's a body of people who form the scientific establishment, and they work on stuff, and then they produce answers, and that's knowledge. I thought you could just act on that, and that was the way the scientific establishment worked. Turns out things are a little bit more complicated than that, a little bit more human, and that, unfortunately, the state of empirical science is a lot less robust than I thought. That came out in the early days of relying on, say, the Disease Control Priorities Project, which had much shakier methodology, and in fact mistakes, that I really, really wouldn't have predicted at the time. And that's definitely been a big shift in my own way of understanding the world.

And then, in two different ways, my colleagues at FHI changed their views on nanotechnology. It really used to be the case that nanotechnology, or atomically precise manufacturing, was regarded as one of the existential risks, and I think people just converged on thinking that that argument was very much overblown. On the other side, Eric Drexler spent most of his life saying, "Actually, atomically precise manufacturing is the panacea. We can be at a post-scarcity world. We can have radical abundance. This is going to be amazing." And then he was able to change his mind and actually think, "Well actually, I'm not sure… it might be good, it might be bad. I'm not sure," despite having worked on and promoted these ideas for decades. This is actually kind of amazing, that people in the community are able to have shifts like that.

And if we've made these updates, perhaps we will make such significant updates again in the future. Then the third argument I'll give you, the third class of arguments, is just all the categories of things that we still really don't understand. The thing I'm focused on most at the moment is trying to build this field of global priorities research to try and address some of these questions, and get more smart people working on them. But one is just how we should weigh probabilities against very large amounts of value. We clearly think that most of the time something like expected utility theory gets the right answers. But then people start to get a bit antsy about it when it comes to very, very low probabilities of sufficiently large amounts of value.

What about when we then start thinking about infinite amounts of value? If we're happy to think about very, very large amounts of value, as long-termists often are, if we think it's not wacky to talk about that, why not about infinite amounts? But then you're really starting to throw a spanner in the works of any sort of reasonable decision theory.
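The tension can be made concrete with a small sketch. All numbers here are illustrative, not from the talk, and `expected_value` is a hypothetical helper:

```python
def expected_value(prob: float, value: float) -> float:
    """Expected value of a gamble that pays `value` with probability `prob`."""
    return prob * value

# Everyday case: expected value gives the intuitive answer.
sure_thing = expected_value(1.0, 10)   # 10
coin_flip = expected_value(0.5, 30)    # 15 -> prefer the flip

# A tiny probability of an astronomical payoff: expected value says
# take the gamble, which is where many people start to get antsy.
mugging = expected_value(1e-20, 1e30)  # 1e10 -> dwarfs the sure thing

# Infinite value: the expected value is infinite at *any* positive
# probability, so it can no longer rank options at all -- the spanner
# in the works of decision theory.
infinite_long_shot = expected_value(1e-20, float("inf"))
infinite_near_sure = expected_value(0.99, float("inf"))
assert infinite_long_shot == infinite_near_sure  # both are inf
```

The last two lines show the failure mode: once any outcome is infinitely valuable, expected-value comparisons stop discriminating between a one-in-a-quintillion chance and a near certainty.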

And it just is the case that we have essentially no idea at the moment how to handle this problem. Similarly with something Open Phil has worked a lot on: which entities are morally relevant? We're very positive about expanding the moral circle, but how far should that go? Nonhuman animals, of course. But what about insects? What about plants? It seems like we have a strong intuition that plants don't have consciousness, and perhaps they don't count, but we don't really have any good underlying understanding of why that is the case. There are plenty of people trying to work on this at the cutting edge, like the Qualia Research Institute, among others, but it's exceptionally difficult. And if we don't know that, then there's a ton we don't know about doing good.

Another category that we're ignorant about is indirect effects and moral cluelessness. We know that most of the impact of our actions lies in unpredictable effects over the very, very long term, because of butterfly effects and so on, because of the ways that our actions will change who is born in the future. We know that that's actually where most of the action is, and yet we can't predict it at all. We're just peering very dimly into the fog of the future. And there's been basically almost no work on really trying to model that, really trying to think, well, if you take this sort of action in this country, how does that differ from this other sort of action in this other country, in terms of its very long-run effects?

So it's not just that we've got this general abstract argument, looking inductively from experience at how we, as a society and as a community, have changed our minds in the past. It's also that we just know there are tons of things that we don't understand. So I think what's appropriate is an attitude of deep, kind of radical uncertainty when we're trying our best to do good. But what kind of concrete implications does this have? Well, I think there are three main things.

One is just actually trying to get more information: continuing to do research, continuing to engage in intellectual inquiry. The second is to keep our options open as much as possible, ensuring that we're not closing doors; even though some look not too promising now, they might actually turn out to be much more promising than they seemed, as we gain more information going into the future and change our minds. The third is plausibly pursuing things that are convergently good: things that look like, "Yeah, this is a really robustly good thing to do from a wide variety of perspectives or worldviews." Reducing the chance of a great power war, for example. Even if my empirical beliefs about the future changed a lot, even if my moral beliefs changed a lot, I'd still feel very confident that reducing the chance of major war in our lifetime would be a very good thing to do.

So, the thing I want to emphasize to you most is keeping this attitude of uncertainty and exploration through what you're doing over the coming year. I've emphasized Athens in response to this Athens versus Sparta dilemma, trying to bear in mind that we want to stay uncertain. We want to keep conformity at the meta level and cooperate and sympathize with people who have very different object-level beliefs from us. And so, above all, we want to keep exploring and stay curious.