Alice Redfern: Moral weights in the developing world — IDinsight’s Beneficiary Preferences Project

Even armed with the best intentions and evidence, donors who consider themselves to be members of the effective altruism (EA) community must make a moral choice about what "doing the most good" actually means — for example, whether it's better to save one life or lift many people out of poverty. Alice Redfern, a manager at IDinsight, discusses the organization's "Beneficiary Preferences Project," which involved gathering data on the moral preferences of individuals who receive aid funding. The project could influence how governments and NGOs allocate hundreds of billions of dollars.

Below is a transcript of Alice's talk, which we've lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.

The Talk

Thank you for having me. I'm really excited to be here today and to speak about this study. Buddy [Neil Buddy Shah], who is the CEO of IDinsight, actually presented a year and a half ago at EA Global: San Francisco 2018. He gave an introduction to what we were doing and shared some of the pilot-program results. Since then, we've finished piloting the project and completed a full scale-up of the same study, so today I'll give the first presentation on the results of what we found, which is very exciting for me.

First, I’m go­ing to try to con­vince you that we should cap­ture prefer­ences from the re­cip­i­ents of aid. [I’ll ex­plain] why, be­cause that’s typ­i­cally missed. Then, I’ll dive into what we did in this study, hope­fully con­vinc­ing you that it is fea­si­ble to cap­ture re­cip­i­ents’ moral prefer­ences.

[From there], we'll get into the results — the really exciting bit. We'll walk through how these preferences and the reasoning behind them differ from the typical EA's reasoning, and how this might change resource-allocation decisions across the global development sector.

Why capture aid recipients' moral preferences?

Why are we doing this?

I think we all know that effective altruists rely on value judgments in order to determine how to do the most good. We can evaluate charities, and have made a huge amount of progress [in that regard]. We do a really good job of trying to figure out which charities do the best work.

But that only gives you part of the answer. You can evaluate a charity like AMF [the Against Malaria Foundation] and find out how many lives are saved when you hand out bednets. Or you can evaluate a charity like GiveDirectly and [determine] the impact of giving out cash transfers. But that doesn't give you a cross-outcome comparison. At some point, someone somewhere has to make a decision about how to allocate resources across different priorities. The kinds of trade-offs you face are at a higher level.

[For example,] if we want to do the most good, should we be saving a child's life from malaria, or should we give a household money? [Transcriber's note: the amounts of money involved in cash transfer programs have the potential to radically improve life for multiple members of a recipient family.] Should we save a younger individual or an older individual? These are the kinds of trade-offs that an organization like GiveWell has to work through every day, and we know that these value judgments are really difficult [to make] — and that there's a lack of relevant data to actually support them.

Current practice in global development falls into one of three categories:

1. People avoid trade-offs completely by not comparing across outcomes. People quite often decide they're going to support something in the health sector, and then they look for what works best within that sector. But that just passes the buck to someone else to decide how to allocate across all of the sectors.
2. People rely on their own intuition and reasons for donating. But often, the people who do this are very far away from the recipients of that aid, and don't necessarily know how those people think through trade-offs.
3. People often rely on data from high-income countries. There is data from these countries to inform value judgments. But there isn't data from the low-income countries where these charities are working. So people just extrapolate [from high- to low-income countries] and assume that will provide part of the answer.

What people do much less often — not never, because I know that there's a growing movement of people who are trying to do this — is make trade-offs that are informed by the views and the preferences of the aid recipients.

How we captured aid recipients' moral preferences

We've been directly partnering with GiveWell since 2017, so for about two years now, to capture preferences that inform GiveWell's moral weights. And we've been thinking about these two questions for the last two years.

First of all, is it even feasible to capture preferences from the recipients of aid that are relevant to subjective value judgments? This is not an easy task, as I'm sure you can imagine. How do you even start? How do you go about it?

Let me tell you what we did. First of all, we spent a very long time piloting [the project]. I said that Buddy spoke a year and a half ago; we spent that [entire] time just piloting, trying to figure out what works, what doesn't work, and what would [yield] useful data. And we eventually settled on three main methods. None of these methods alone is perfect; I think it's really important that we have three different methods for capturing this information from different points of view.

The first thing we did was capture the value of statistical life. This is the measure that's most often used in high-income countries to try and put a dollar value on life. It's used by governments like those in the U.S. and in the U.K. to make these really difficult trade-offs.
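
[Editor's note: to illustrate the general logic behind a value of statistical life, here is a standard textbook-style calculation with made-up numbers; these figures are illustrative and are not from the study.]

```python
# Standard value-of-statistical-life (VSL) arithmetic, for illustration only.
# The numbers below are hypothetical, not figures from the IDinsight study.

willingness_to_pay = 100.0   # dollars a respondent would pay for...
risk_reduction = 1 / 1_000   # ...a 1-in-1,000 reduction in annual mortality risk

# 1,000 such risk reductions add up to one statistical life saved, so the
# implied value of a statistical life is the WTP divided by the risk change.
vsl = willingness_to_pay / risk_reduction
print(f"Implied VSL: ${vsl:,.0f}")  # $100,000
```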

We also [conducted] two choice experiments. We asked people to think about what they want for their community. The first [questions] asked people about trade-offs between saving the lives of people of different ages. For example: Would you rather have a program that saves the lives of 100 children under five, or one that saves the lives of 500 people over 40? We asked everyone which one they'd prefer and then aggregated [the results] across the whole population to get a relative value of different ages.

Our last choice experiment, which I'll go into more detail on later, did a similar thing, but looked specifically at saving lives and providing cash transfers.
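
[Editor's note: one rough way to turn binary choices like these into a relative value is to vary the trade-off ratio and look for the point where the population splits 50/50. The sketch below is a hypothetical illustration of that idea; the data and the simple interpolation are invented and are not the study's actual estimation procedure.]

```python
# Hypothetical sketch: inferring the ratio at which respondents are collectively
# indifferent between saving under-5 and over-40 lives. All data are invented.

# Each tuple: (over-40 lives offered per 100 under-5 lives,
#              fraction of respondents preferring the under-5 program)
responses = [(100, 0.90), (300, 0.75), (500, 0.60), (1000, 0.40)]

def indifference_ratio(points):
    """Linearly interpolate the ratio at which the preference split is 50/50."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0:  # the 50% line is crossed on this segment
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return None  # preferences never cross 50% within the offered range

ratio = indifference_ratio(responses)
print(f"Indifference point: roughly {ratio:.0f} over-40 lives per 100 under-5 lives")
```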

We also interviewed around 2,000 typical aid recipients across Kenya and Ghana over the last eight months or so of 2019. We conducted these quantitative interviews with really low-income households across many communities. And what became increasingly important as we went on was that we also conducted qualitative interviews with individuals and groups of people to understand how they were responding to our questions and processing these difficult trade-offs.

Did it work? That's a big question. My take on it is that overall, a majority of respondents demonstrated a good understanding of our approaches. I say a majority; it definitely wasn't everyone.

These were complicated questions that we asked people to engage with. But there were some things that reassured us of people's understanding. First of all, we spent a lot of time developing visual aids to guide people through the trade-offs.

This photo is from one of the [quantitative] interviews. You can see our enumerator on the left holding the visual aids and the beneficiary pointing to her choice, which presumably is the program on the left.

We also, as I said, did qualitative work to really understand how people were interpreting our questions — if they were falling into some of the pitfalls that these questions present and misinterpreting either the questions or how to make the trade-offs.

The third and most quantitative thing that we did was to build in "understanding tests" for every single method. We had relatively high pass rates, ranging from 60% to 80% of our sample. We eventually excluded those who clearly didn't understand. Our estimates come purely from the sample of people who did understand, which again increases our confidence.
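
[Editor's note: in practice this implies a simple filtering step before estimation. The sketch below illustrates the idea; the record fields and values are invented and are not IDinsight's actual data pipeline.]

```python
# Hypothetical sketch of filtering on comprehension checks before estimation.
# Respondent records and values are invented for illustration.

respondents = [
    {"id": 1, "passed_understanding_test": True,  "implied_value_usd": 40_000},
    {"id": 2, "passed_understanding_test": False, "implied_value_usd": 5_000},
    {"id": 3, "passed_understanding_test": True,  "implied_value_usd": 120_000},
    {"id": 4, "passed_understanding_test": True,  "implied_value_usd": 80_000},
]

# Keep only respondents who demonstrated understanding of the method.
analysis_sample = [r for r in respondents if r["passed_understanding_test"]]

pass_rate = len(analysis_sample) / len(respondents)
mean_value = sum(r["implied_value_usd"] for r in analysis_sample) / len(analysis_sample)
print(f"Pass rate: {pass_rate:.0%}; mean implied value in sample: ${mean_value:,.0f}")
```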

Overall, I think [capturing preferences] is feasible. It hasn't been easy; it's resource-intensive and you really do have to put a lot of effort into making sure the data is high-quality. But I do think we've captured actual preferences from these individuals.

The results

On to the good bit, which is the results. For the rest of the presentation, I'm going to walk you through these. I'll start with a high-level look at the results, which really is as simple as I can get with them. As you can imagine, there's a lot behind them that I'm then going to try to unpack a bit.

What we found at the highest level is that respondents place a really high value on life — particularly the value of life for young children. On the left, you'll see the results of our study as a quite crude dollar value per death averted, for an individual under five years old and for an individual five or older. In the middle, you can see what happens when you predict what this value "should" be for this population based on high-income data — and our results are quite a bit higher than what people expected for this population.

If you look on the right, [you see these values in terms of] the median of GiveWell's 2018 moral weights. We use this as our benchmark for trying to figure out where we are and what the potential impact is for GiveWell. You'll see here that our results are substantially higher than [GiveWell's] results from [2018].

[I'd like to] stress that these are GiveWell's 2018 moral weights; they're currently updating them with our results in mind. We use this as a benchmark, but it doesn't necessarily represent where they are now.

But what we found was that our results are nearly five times as high as the GiveWell median moral weights for individuals under five years old. This is driven by two things. One is that, overall, people place a higher value on life than predicted. The second is that people consistently value individuals under five more highly than the other individuals we asked about.

As you saw on the previous slides, many GiveWell staff members' rankings of individuals are flipped compared to [our results]. We see a big difference in the value of people under five, and we also see a difference in the value of people over five; it's about 1.7 times higher.

So what does this mean?

To understand the impact of this, we took these results and put them directly into GiveWell's cost-effectiveness model. We asked, "What does that do to the cost-effectiveness of the different charities?" As you might imagine, because we saw a higher value placed on saving individuals under five, the cost-effectiveness of Helen Keller International, the Malaria Consortium, and the Against Malaria Foundation increases. The same would be true outside of GiveWell: any cost-effectiveness or cost-benefit study that makes this trade-off and uses these results would shift toward life-saving interventions.
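
[Editor's note: to see the mechanics, here is a toy version of that kind of calculation. The numbers and the deliberately simplified formula are invented; this is not GiveWell's actual cost-effectiveness model.]

```python
# Toy illustration of how a moral weight feeds into cost-effectiveness.
# All numbers are invented; this is not GiveWell's model.

def value_per_dollar(deaths_averted, moral_weight, total_cost):
    """Modeled 'units of value' per dollar for a purely life-saving program."""
    return deaths_averted * moral_weight / total_cost

# Hypothetical bednet-style program: 100 under-5 deaths averted for $350,000.
baseline = value_per_dollar(100, moral_weight=50, total_cost=350_000)
updated = value_per_dollar(100, moral_weight=250, total_cost=350_000)  # ~5x weight

print(f"Baseline: {baseline:.4f} units per dollar")
print(f"With a ~5x higher under-5 weight: {updated:.4f} units per dollar")
```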

[However,] there's a whole host of reasons why this might not be the [best approach]. It might not be perfect to just default to beneficiary preferences; you might not want to just take [those] at face value. I'll walk you through some of that [thinking] now.

What I've [shared] so far is the simple aggregated result, and to arrive at that, we take our three methods, come up with an average, and then compare it to GiveWell's average. But once you really get into the data, you see that there's a lot more going on.

[Let's look at] the details of just one of our methods. This was the last choice experiment I mentioned, which gets at the relative value of money and life. This [slide shows] one of the visual aids that we used.

The language is Swahili, from our Kenya study. We asked respondents to choose between two programs. Program A, shown on the left, saves lives and gives cash transfers, and it gives substantially more cash transfers [than Program B]. Program B, shown on the right, saves one more life [than Program A], but it gives substantially fewer cash transfers.

We randomly varied the number of cash transfers so that each person faced three random choices within a range, and we looked for people switching [their choice]. If people didn't switch, we then pushed them to the extreme and said, "Okay, if you prefer cash, what if the difference is now only one cash transfer and one life? Which one would you pick?" On the flip side, if they always picked life, we pushed the number of cash transfers really high — to 1,000, to 10,000 — and asked, "What would you do then? Will you eventually switch?" We were really looking for people who have very absolute views rather than just high valuations.
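
[Editor's note: a minimal sketch of that switching logic might look like the following. The function names, offer ranges, and respondent model are hypothetical; this is not the actual survey instrument.]

```python
import random

# Hypothetical sketch of the switching design described above. The offer ranges,
# extremes, and `ask` interface are invented for illustration.

def elicit(ask, transfer_range=(10, 100), extremes=(1, 10_000), n_choices=3):
    """Present randomized life-vs-cash choices and classify the respondent.

    `ask(n_transfers)` should return "life" if the respondent prefers the program
    that saves one extra life over `n_transfers` extra cash transfers, else "cash".
    """
    offers = sorted(random.randint(*transfer_range) for _ in range(n_choices))
    answers = [ask(n) for n in offers]

    if len(set(answers)) > 1:
        return "switcher", offers, answers

    # No switch within the range: push toward the relevant extreme.
    extreme = extremes[0] if answers[0] == "cash" else extremes[1]
    final = ask(extreme)
    label = "switcher-at-extreme" if final != answers[0] else "never-switches"
    return label, offers + [extreme], answers + [final]

# Example: a respondent who prefers the life-saving program no matter what.
print(elicit(lambda n: "life"))
```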

We found that the high values [of a life] are driven by a large chunk of respondents who always choose life-saving interventions. So 38% of our respondents still picked the program that saves one extra life, even when it was compared to 10,000 cash transfers of $1,000 each. That would represent a value of over $10 million for a single life, which is far above what's predicted for this population. If we were to fully incorporate this preference — and I say that because it isn't given full weight in our simple average — the value placed on life-saving interventions would be much higher.

Our question then was: What does that mean? What's happening with these respondents? Why are they expressing this view? Are they not really engaging with the question? Maybe they have a completely different way of thinking about it.

What we found was that for many of our respondents, this does represent a clear moral stance. We figured this out through our qualitative work. I've put up one quote here as an example of how clear people are on their views. This respondent in Migori County, Kenya said, "If there is one sick child in Migori County that needs treatment, it's better to give all the money to save the child, than give everyone in the country cash transfers."

I can hear all of the EAs in the room panicking at a statement like that. It's very different from the typical utilitarian way of thinking about this trade-off, and we tried to understand the main reasons that people were choosing this.

Two big things came out. The first was that people place a really high value on even a very small [amount of] potential for a child to become someone significant in the future. We heard again and again, "We don't know which child might succeed and help this entire community. This one child might become the leader. They might become the economic force of the future." They really place a high value on that small potential and want to preserve it; they [believe] that a young child deserves [that potential more than] people who might receive an amount of cash, which will have a smaller impact across more people.

The second thing we saw was people holding the view that life has inherent value that just cannot be compared to cash. This is almost the opposite of the utilitarian view. This is starting from a deontological point of view whereby "you just can't make this trade-off." It starts from the moral standpoint that life is more important. A lot of people express this view. Sometimes it's grounded in religion — you do hear people quoting their religion and [discussing] the sanctity of life — and other times it's separate from religion and just a moral standpoint.

Then [the results] get interesting again. Thirty-eight percent of our respondents choose life no matter what. But what happens if we look at the 54% who switch between life and cash at some point?

If you look just at this group, you see that they act more like the typical utilitarians. Approximately half of the 54% of respondents were willing to switch below a level of 30 cash transfers, which (at roughly $1,000 per transfer) comes out to an implied value of about $30,000. If you remember the values from the earlier slides, that's very similar to the values that are already in GiveWell's moral weights (and already seen in the literature). If we were to focus only on the people who hold this moral view, the answer is actually reasonably similar to what we already thought.

I think this leaves us with a very interesting question of what to do with the one-third of respondents who apply a completely different framework to this situation. An easy answer might be: "These are people who are engaging with the trade-off [those whose views align more with EAs], so maybe we should look at their value and put more weight on that." But does that mean we're going to ignore the preferences of the one-third of respondents who really do think in a different way from the typical utilitarian framework?

I won't linger on this, but we also looked at the [qualitative data] on the people who do switch, and the reasons they give [for doing it] are very similar to those that we probably all give when we try and think through these trade-offs — and very similar to the types of reasons that our staff members give when they're trying to come up with their moral weights.

To sum up, what have we found?

On average, we found that aid recipients do place more value on life than predicted, particularly the lives of young children. What we also know is that these results can be incorporated into resource-allocation decisions straightaway. GiveWell is already working on updating their moral weights using this data. We've already been talking to other organizations that might be interested in using [our findings] as well.

The answer might be that more focus should be placed on life-saving interventions. But — and this is a very big "but" — the framework used by respondents to answer these questions is different from the typical framework that's used. I think a lot more thought is needed to really understand what that means and how to interpret these results and apply them to difficult decisions. I don't think anyone is going to take [the findings] so [literally] that they take a number and straightforwardly apply it to a decision. We need to work through all of the different ways of thinking about this [in order] to understand how best to apply [our research].

So what's next? I've just scratched the surface of the study here. We did a lot of other things. But from here, we also would love to apply a similar approach to capture preferences for more populations. I think what we've shown is that these preferences are different from what was expected, and there were substantial regional and country-level variations. The preferences weren't uniform.

I think there's a lot of value in continuing to capture these preferences and in formulating our thinking about how to incorporate the views of the recipients of charity. There are also a lot of related questions that we haven't even touched on. We focused on some really high-level trade-offs between cash and life, but there are a large number of trade-offs that decision makers have to make. We could potentially [explore those] with a similar approach.

Then, as I've mentioned, there's also a lot of work to do now to understand how best to apply these results to real-world decision-making. Any study using a cost-benefit [analysis] of interventions in low- and middle-income countries could immediately implement these results, and we'd like to work with different people to figure out how they can do that, and what the best approach is for them.

I think there's also an opportunity here for nonprofits to try to include preferences in their decision-making. Again, we would love to help figure out what that could look like. And at an even higher level, we would love to see foundations and philanthropists think more about how they can incorporate preferences into their own personal resource allocation. So that's something we're thinking about right now.

Great — that [covers] everything. Thank you.

Nathan Labenz [Moderator]: Thank you very much, Alice. Great talk. We have questions beginning to come in through the Whova app.

The first one is really interesting because it contradicted my intuition on this, which was that [your research is] just brilliant. It seems super-smart and obvious in retrospect, as Toby [Ord] said in his [EA Global talk], to just ask people what they think.

But one questioner is challenging that assumption by asking, "How much do we want to trust the moral framework of the recipients? For example, if you were to ask folks in different places how they weigh, relatively speaking, the moral importance of women versus men, you might get some results that you would dismiss out of hand. How do you think about that at a high philosophical level?"

Alice: Yeah, this is really tricky. I think that essentially there has to be a limit to how far you can go with these results.

Another example that I think about often [involves] income. When you do a lot of these studies, you find, again and again, that the results are very much tied to income. And there's this really dangerous step beyond that in which you might say, "If someone has a higher income, their life is more highly valued." Should we be prioritizing saving the lives of richer people? I don't think anyone is particularly comfortable with that.

I don't know where [you draw the] line, but I think there's a process here for figuring out what the application of these results is and how far we can go with it. It definitely can't be just defaulting to [the results], because I think that leads us to some dangerous places, as you pointed out.

Nathan: Yeah. I think you've covered this, but I didn't quite parse it. How did you end up weighting the views of the one-third of people who never switched [their opinion on the value of life] into those final numbers that you showed at the beginning?

Alice: Sure. I think I said that with the choice experiment [we ran], everyone made three choices that were randomly assigned. Then we looked to the extremes to identify the non-switchers.

To estimate [a value of life], we take the value of those three random choices. Essentially, that puts a cap on how high someone's estimated value can be, because it sits at the limit of those random choices and not above them. That gives us a workable estimation model and allows us to come up with a single value, but it does understate just how highly certain people value life.

Like I said, when we incorporate it fully, the value comes out way higher — like 50 times higher — because their value is near infinite. It definitely changes the results, but I think you have to do that to have a number that you can then process and work with.

Nathan: I'm recalling that graph where you had 10 or 20 through 100, and then 1,000 or 10,000. One hundred would be the highest that anyone could be weighted even if they never switched.

Alice: Yeah, exactly. If it's higher than 100, it gets some weight. It can go above 100, but it's capped at that point.
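
[Editor's note: to make the capping mechanics concrete, here is a small numerical sketch. All values are invented; this is not the study's estimation code.]

```python
# Hypothetical sketch of how capping non-switchers affects the aggregate estimate.
# All values are invented for illustration.

TRANSFER_VALUE_USD = 1_000   # assumed dollar value of one cash transfer
CAP = 100                    # highest number of transfers offered in the design

switcher_values = [20, 25, 30, 40, 60]   # implied values (in transfers) for switchers
n_never_switch = 3                       # respondents who chose life at every offer

def mean_value(non_switcher_value):
    """Average implied value of a life, given how non-switchers are scored."""
    values = switcher_values + [non_switcher_value] * n_never_switch
    return TRANSFER_VALUE_USD * sum(values) / len(values)

print(f"Capped at the highest offer: ${mean_value(CAP):,.0f}")
print(f"Scoring non-switchers at the 10,000-transfer extreme: ${mean_value(10_000):,.0f}")
```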

Nathan: Okay, cool. Tell me about some of the other approaches that you tried. I think some of the most interesting aspects of this work are actually figuring out how to ask these questions in a way that produces meaningful results, even if they're not perfect. It sounds like there are a dozen other approaches that were tried and ultimately didn't work. Tell us a bit about that.

Alice: Sure. Yeah. It was a real process to figure out what works and which different biases changed the results in specific directions. Even this choice experiment has been through many iterations. At one point we had a very direct framing of "would you save a life, or give this many cash transfers?"

But when it's very direct — and I think this has been shown in behavioral economics elsewhere — people are inclined to never switch [their answers based on how many cash transfers are proposed] and always choose life. Whereas as soon as you make it indirect, people are willing to engage with the trade-off, and you actually get at their preferences. So it required a lot of framing edits.

We also tried some participatory budgeting exercises, which [some researchers have] tried out and used in the area [of global development]. Again, it's tricky. There's a lot of motivation to allocate resources very evenly when you're in a group setting. Also, the results are less directly applicable to the problem at hand.

I think those are two big things we tried out. We tried a lot of different "willingness to pay" [questions] that we ended up ruling out as well.

Nathan: Here's another interesting [question] from the audience about things that maybe weren't even tried. All of these frameworks were defined by you, right? As the experimenter?

Alice: Yeah.

Nathan: Did the team try to do anything where you invited the recipients to present their own framework from scratch?

Alice: Oh, that's interesting. In a way we did. Like I said, we did a lot of individual qualitative interviews, and those were not grounded in the three methods that I presented here. They were completely separate. A big part of those interviews was just presenting recipients with the GiveWell problem and asking, "How do you work through this?", and then getting people to walk us through [their thinking].

With some people it's very effective. Some people are very willing to engage. But as you can imagine, some people are taken aback. It's a complicated question. We tried it in that sense.

For us, simplifying [our questions] down to a single choice [in the other research methods] is what helped make the methods work. But qualitatively, you can go into that detail.

Nathan: Awesome. Well, this is fascinating work and there's obviously a lot more work to be done in this area. Unfortunately, that's all the time we have at the moment for Q&A. So please, another round of applause for Alice Redfern. Great job. Thank you for joining us.
