Karolina Sarek: How to do research that matters

Why is it that some high-qual­ity re­search fails to af­fect where we choose to di­rect our time and money? What makes other re­search more per­sua­sive or ac­tion­able? In this talk, Karolina Sarek, co-founder and di­rec­tor of re­search at Char­ity En­trepreneur­ship, dis­cusses tech­niques for pro­duc­ing im­pact­ful, de­ci­sion-rele­vant re­search.

Below is a tran­script of Karolina’s talk, which we’ve lightly ed­ited for clar­ity. You can also watch it on YouTube and read it on effec­tivealtru­ism.org.

The Talk

What makes good re­search? Hav­ing worked as a re­searcher for academia, non­prof­its, and for-prof­its, I re­peat­edly ob­served some­thing that not only struck me, but ul­ti­mately led to a ca­reer change: In spite of the countless hours that went into re­search, not that much changed. There were no ma­jor up­dates, no strat­egy changes. Re­search efforts didn’t nec­es­sar­ily trans­form into ac­tual units of good. There was a big gap be­tween re­search and de­ci­sion-mak­ing. So I started won­der­ing why. What makes good re­search?



I de­cided to pose this ques­tion to a group of pro­fes­sion­als work­ing on the method­ol­ogy of sci­ence, and I heard that good re­search:

* Needs to be pub­lished in peer-re­viewed jour­nals.
* Is high-pow­ered and has big sam­ple sizes.
* Is valid and cap­tures what it set out to cap­ture.
* Comes from non-bi­ased sources of fund­ing and non-bi­ased re­searchers.
* Pre­sents pre­cise re­sults.

I think that all of those qual­ities are ex­tremely im­por­tant. Sadly, they’re also true for these stud­ies:

[Slides: examples of rigorous but unimportant studies, including ones about the Eiffel Tower, crocodiles, and sheep]

As amus­ing as this is, I think we all agree that ex­plor­ing those top­ics is not the best al­lo­ca­tion of our limited re­sources. But I also think that com­mon mea­sures of re­search qual­ity are not get­ting at what we, as [mem­bers of the effec­tive al­tru­ism com­mu­nity], care about most. They are not fo­cused on in­creas­ing the im­pact of the re­search it­self.


So even though I knew the qual­ities of good re­search, again I saw this gap be­tween re­search and de­ci­sion mak­ing. But this time I de­cided to change that. In­stead of just ask­ing ques­tions, I co-founded Char­ity En­trepreneur­ship, an or­ga­ni­za­tion that not only con­ducts re­search to find the high­est-im­pact ar­eas to work on, but also im­me­di­ately finds peo­ple and trains them to start high-im­pact char­i­ties. They’re im­ple­ment­ing the in­ter­ven­tions that we find to be most promis­ing.


Now I’ll tell you what I’ve learned in the pro­cess. I’ll tell you about three main les­sons that I ap­ply in our re­search — and that you can ap­ply in your re­search as well:

1. Always reach con­clu­sions through your re­search.
2. Always com­pare al­ter­na­tives with equal rigor.
3. De­sign re­search to af­fect de­ci­sion-mak­ing.

I’ll ex­plain what the cur­rent prob­lems are in these three ar­eas, show you ex­am­ples, ex­plain why they hap­pen, and provide solu­tions to ad­dress them.

Les­son No. 1: Reach con­clu­sions


Imag­ine that you have a crys­tal ball. In­stead of tel­ling you the fu­ture, it tells you the an­swer to any ques­tion you have, in­clud­ing your re­search ques­tions. And you’re 100% sure it’ll tell the truth. Would you ask the crys­tal ball all of the re­search ques­tions you have to avoid the effort of do­ing the re­search in the fu­ture?


I think if we could, we definitely should, be­cause the main goal of con­duct­ing re­search is the des­ti­na­tion — the an­swer we get — not the path for get­ting there. And in my view, the ideal re­search is not beau­tifully writ­ten and prop­erly for­mat­ted, but pro­vides the cor­rect an­swer to the most im­por­tant ques­tions we have.

I know that in the real world, we don’t have a crys­tal ball, and we need to rely on strong episte­mol­ogy and good re­search pro­cesses. But we also need to re­mem­ber that the main goal is to get the an­swer — not to fol­low the pro­cess.


If we fail to reach a con­clu­sion, there are three ma­jor prob­lems:

1. The re­search won’t have any im­pact. This is the most im­por­tant prob­lem. We might not uti­lize the re­search that has been con­ducted and it might not af­fect de­ci­sions or change any­thing.
2. [Failing to reach a con­clu­sion may re­sult in] re­dun­dant re­search. When re­search ends with the state­ment “more re­search is needed,” we very of­ten must [re­view] the same facts and con­sid­er­a­tions [as a pre­vi­ous re­searcher], but this time [reach] con­clu­sions by our­selves.
3. Others might draw the wrong conclusion. What about a situation where your audience really wants an answer? Maybe they’re a funder who is thinking about funding a new project or organization. If you, as the researcher, do not draw conclusions from your own research, in a way you’re passing that responsibility on to your audience. And very often, the audience has less knowledge of, or time to explore, the nuances of your research, and therefore could draw worse conclusions than you would.


These problems became apparent to me for the first time when I was following the progress of golden rice. Golden rice is genetically modified to contain beta-carotene, a source of vitamin A, to prevent malnutrition, mostly in Southeast Asia. The Copenhagen Consensus Center stated that implementing golden rice in agriculture was one of the biggest priorities [for the country of Bangladesh]. And in a 2018 approval, the U.S. Food and Drug Administration stated that the level of beta-carotene in golden rice was “too low to warrant a nutrient content claim.”

You can only imag­ine what that did to pub­lic opinion about golden rice. You could see head­lines like “‘Ge­net­i­cally mod­ified or­ganism (GMO) golden rice offers no nu­tri­tional benefits,’ says FDA” and “FDA says golden rice is not good for your health.” But here’s the catch: It’s true that it offers no health benefits — to Amer­i­cans. In or­der to claim that food is for­tified with a spe­cific vi­tamin, com­pa­nies must in­clude a spe­cific amount, based on a typ­i­cal Amer­i­can diet. But a typ­i­cal South­east Asian eats 25 times more rice than a typ­i­cal Amer­i­can. There­fore, the beta-carotene [in golden rice] is suffi­cient for a child who is grow­ing up in Bangladesh with a vi­tamin A defi­ciency, but doesn’t provide any health benefits to Amer­i­cans [who eat much less rice]. This shows that the con­clu­sion we draw needs to be rele­vant to the de­ci­sion that is ac­tu­ally be­ing made.

For­tu­nately, the FDA ad­justed their state­ment and pro­vided sup­ple­men­tary con­clu­sions to help guide the de­ci­sion of Asian reg­u­la­tory bod­ies. As a re­sult of that effort and many other ac­tions, golden rice was ap­proved and re­leased in Bangladesh in 2019. The story has a very happy end­ing, but be­fore [that hap­pened], 2,000 chil­dren lost their lives, and an­other 2,000 be­came ir­re­versibly blind, due to vi­tamin A defi­ciency [num­bers which likely would have been lower if golden rice had been in­tro­duced ear­lier].

That shows how im­por­tant it is to come to a con­clu­sion, for the con­clu­sion to be rele­vant to the de­ci­sion be­ing made, and of course, for the con­clu­sion to be cor­rect.


So, what stops us from draw­ing a con­clu­sion? First of all, we might be wor­ried that there is not enough strong, ob­jec­tive ev­i­dence upon which to base our con­clu­sion. We might in­tro­duce too much sub­jec­tive judg­ment or moral rea­son­ing, which might not be gen­er­al­iz­able to [oth­ers’ views]. Se­cond, there could be too much un­cer­tainty to pro­pose a con­clu­sion. We don’t want to spread in­cor­rect in­for­ma­tion. And fi­nally, we could be wor­ried about stat­ing the wrong con­clu­sion, which could lead to rep­u­ta­tional dam­age — for us per­son­ally as re­searchers, and for our or­ga­ni­za­tion.


There are a few solu­tions to [these ob­sta­cles]. If you don’t have a con­clu­sion, then you can dis­t­in­guish your sub­jec­tive mod­els from the ob­jec­tive ones. Maybe you’re fa­mil­iar with how GiveWell does their cost-effec­tive­ness anal­y­sis, split­ting sub­jec­tive moral judg­ments about im­pact (for ex­am­ple, sav­ing lives) from ob­jec­tive judg­ments based on the ac­tual effec­tive­ness of spe­cific pro­grams or in­ter­ven­tions. Of course, in the end, you need to com­bine those mod­els to make a hard de­ci­sion, similar to how GiveWell makes such de­ci­sions, but at least you have a very clear, trans­par­ent model that [dis­t­in­guishes be­tween] your sub­jec­tive judg­ment and the ob­jec­tive facts.


A sec­ond solu­tion is to state your con­fi­dence and epistemic sta­tus. If you’re un­cer­tain when con­duct­ing your re­search and can’t draw a con­clu­sion based on the ev­i­dence at hand, you can just state that the ev­i­dence is not suffi­cient to draw a con­clu­sion.


Ad­di­tion­ally, I think it’s definitely worth in­clud­ing what ev­i­dence is miss­ing, why you can­not draw a con­clu­sion, and what in­for­ma­tion you would need in or­der to come to a con­clu­sion. That way, if some­body in the fu­ture is re­search­ing the same topic, they can see the spe­cific ev­i­dence you were miss­ing, sup­ple­ment your re­search, and come to a con­clu­sion more quickly.


And the last solution I’ll suggest is to present a list of plausible conclusions. You can state how confident you are in each possibility and [explain what missing information] could change your mind about it [if you were to learn that information]. You’re just laying out the plausible conclusions of your research.

And what if you actually do formulate a conclusion, but are afraid to state it? I would hate to be responsible for directing talent and charitable dollars to another PlayPump-type charity. [The PlayPump, a merry-go-round-powered water pump, was rolled out widely on the early conclusion that it would be very useful to villages, only for later evidence to show that conventional hand pumps actually worked better.]


How­ever, if we slow down or fail to pre­sent our true con­fi­dence un­til we are more cer­tain, then we sac­ri­fice all of the lives that have been lost while we’re build­ing up courage. And the lives that are lost when we slow down our re­search [mat­ter just as much] as those lost when we draw the wrong con­clu­sion.


Lastly, we can build in­cen­tive struc­tures and [re­frain from] pun­ish­ing those who come to in­cor­rect con­clu­sions in their char­i­ta­ble en­deav­ors. As long as the method they used was cor­rect, their re­search efforts shouldn’t be diminished in our eyes.

Les­son No. 2: Ap­ply equal rigor to al­ter­na­tives


The sec­ond les­son I’ve learned in­volves com­par­ing al­ter­na­tives and ap­ply­ing equal rigor to all of them.


The first prob­lem with com­par­i­son is that there’s a lot of high-qual­ity re­search out there. Re­searchers might be [ex­plor­ing] the effec­tive­ness of an in­ter­ven­tion that hasn’t been eval­u­ated be­fore. Or maybe they’re as­sess­ing the life or welfare of a wild bird, or a harm that could be caused by a pan­demic [see the next para­graph for more on these ex­am­ples]. Even though the re­search might be very high-qual­ity and [ca­pa­ble of pro­vid­ing] in-depth knowl­edge and un­der­stand­ing about a given topic, we might not be able to com­pare [the is­sue be­ing stud­ied] to all of the al­ter­na­tives.

For ex­am­ple, what if we know that the in­ter­ven­tion we eval­u­ate is effec­tive, but not whether it’s more effec­tive than an al­ter­na­tive ac­tion we could be tak­ing? Wild birds suffer enor­mously in na­ture, but they may not suffer more than an in­sect or a fac­tory-farmed chicken. A pan­demic might be dev­as­tat­ing to hu­man­ity, but is it worse than a nu­clear war? And I know that this point may sound like effec­tive al­tru­ism 101 — it’s a fun­da­men­tal ap­proach for effec­tive al­tru­ists — but why isn’t there more com­pa­rable re­search? Not hav­ing com­pa­rable re­search can re­sult in tak­ing ac­tions that are not as im­pact­ful as an al­ter­na­tive.


The lack of com­par­i­son is not the only prob­lem, though. We also need to look out for an un­equal ap­pli­ca­tion of rigor. Imag­ine that you are try­ing to eval­u­ate the effec­tive­ness of In­ter­ven­tion A. You’re run­ning a very high-qual­ity RCT [ran­dom­ized con­trol­led trial] on that. And for [In­ter­ven­tion] B, you’re offer­ing an ex­pected value based on your best guess and best knowl­edge. How much bet­ter would In­ter­ven­tion B need to be in or­der to com­pen­sate for the smaller ev­i­dence base and [re­duced] rigor? Is it even pos­si­ble?

We might think that we are comparing causes or interventions in a very similar manner, but in fact, the comparisons might not be cross-applicable. For example, we could have been looking into one intervention for a longer period of time. We [would thus have had] time to discover all of the flaws and problems with it [making it look weaker than an intervention we hadn’t examined for very long]. Or the intervention might simply have a stronger evidence base, so we can pick specific things that are wrong with it [it can be easier to criticize something we know a lot about than something we’ve barely studied].


I think the perfect ex­am­ple is some­thing that I’ve ob­served in al­most all of the char­i­ties that peo­ple started through the Char­ity En­trepreneur­ship In­cu­ba­tion Pro­gram. Ba­si­cally, when we first sug­gest a char­ity idea, par­ti­ci­pants are ex­tremely ex­cited and con­fi­dent. They feel like it’s the high­est-value thing for them to do, and are like, “I’m go­ing to spend my life do­ing that.” But they start re­search­ing the idea and their con­fi­dence drops. Maybe they dis­cover that it’s harder than they ex­pected, or the effect was not as big as they had hoped.

How­ever, the mo­ment they start look­ing into al­ter­na­tive op­tions, they re­al­ize that the al­ter­na­tive char­ity that they want to start, or the com­pletely differ­ent ca­reer path they want to go down, has even more flaws and prob­lems than the origi­nal char­ity idea that was sug­gested. In turn, their con­fi­dence re­cov­ers. And for­tu­nately, most of the time, they do re­flect and ap­ply equal rigor, and the higher-im­pact char­ity wins and gets started.

How­ever, imag­ine a situ­a­tion where that’s not the case, and some­body pur­sues a com­pletely differ­ent ca­reer be­cause they didn’t ap­ply equal rigor [when re­search­ing their op­tions], so that they start a differ­ent char­ity that’s less effec­tive. Of course, that [re­sults in] real con­se­quences for the be­ings we want to help.


So why does this hap­pen?

First of all, [it may be due to] a lack of research. If you’re working with a small evidence base, there’s no research to which you can compare your [initial idea]. Or maybe there isn’t enough data that has been gathered in a similar manner, so you cannot compare it with the same rigor. [An easier problem to solve] is the lack of a system or research process to ensure that, at the end, you compare all of the options. [Without that], comparison won’t happen.

Also, if we don’t have a pre-set pro­cess — a frame­work and ques­tions — we might end up with a situ­a­tion where we’re ex­plor­ing a given area just be­cause there’s more in­for­ma­tion available there. In­stead of look­ing at the in­for­ma­tion we need, we look at the in­for­ma­tion we have and base our re­search on that.

And the last, very hu­man rea­son is just cu­ri­os­ity. It’s very tempt­ing as a re­searcher to fol­low a lead that you’ve dis­cov­ered [and give less at­ten­tion to com­pa­rable ideas]. I fully un­der­stand that. I think a lot of re­searchers de­cided to be­come re­searchers be­cause of this truth-seek­ing drive. They want to find ev­ery­thing and have knowl­edge about the en­tire area they’re ex­plor­ing.

————

If you’re convinced of the benefits of doing systematic research, there are two simple methods for going about it.


First, use spread­sheets to make good de­ci­sions. Do you re­mem­ber the ex­am­ple of [peo­ple’s con­fi­dence in] char­i­ties go­ing up and down? That could have been pre­vented had they sim­ply used a spread­sheet [to com­pare differ­ent ideas in one place, rather than chang­ing their views with each new idea they ex­plored]. The model isn’t par­tic­u­larly novel, but I think it’s quite un­der­used, es­pe­cially given the body of ev­i­dence from de­ci­sion-mak­ing sci­ence that sug­gests it is very effec­tive. The whole pro­cess is ex­plained in Peter Hur­ford’s blog post on the topic. I’m just go­ing to sum­ma­rize the main steps.

First, state your goal. For example, it could be, “What charity do I want to start?” You need to brainstorm many possible interventions and solutions. Next, create criteria for evaluating those solutions — and custom weights for each criterion. Then, come up with research questions that will help you evaluate each idea against each criterion. At the end, you can look at the ideas’ ratings, sort them, and see which three ideas are the most plausible. Of course, this won’t provide an ultimate answer, but at least it will narrow down [your options] from 1,000 things you could do to the top three to explore and test in real life.
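
To make that process concrete, here is a minimal sketch of such a weighted-factor spreadsheet in Python. The criteria, weights, and ratings are hypothetical placeholders, not values from Charity Entrepreneurship’s actual model; in practice each rating would come from answering your pre-set research questions.

```python
# A minimal sketch of the weighted-factor spreadsheet model described above.
# All criteria, weights, and ratings below are hypothetical placeholders.

CRITERIA_WEIGHTS = {
    "scale": 0.4,           # how big is the problem / the achievable effect?
    "tractability": 0.35,   # how feasible is the intervention to implement?
    "neglectedness": 0.25,  # how crowded is the space already?
}

# Each idea is rated 0-10 on every criterion.
ideas = {
    "Intervention A": {"scale": 8, "tractability": 5, "neglectedness": 6},
    "Intervention B": {"scale": 6, "tractability": 8, "neglectedness": 7},
    "Intervention C": {"scale": 9, "tractability": 3, "neglectedness": 4},
}

def weighted_score(ratings):
    """Combine per-criterion ratings into a single score using the preset weights."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Sort the ideas from most to least promising and show the top three.
ranked = sorted(ideas.items(), key=lambda item: weighted_score(item[1]), reverse=True)
for name, ratings in ranked[:3]:
    print(f"{name}: {weighted_score(ratings):.2f}")
```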

We need to not only com­pare al­ter­na­tives, but also ap­ply equal rigor. So how do we do that?


Pre-set a re­search pro­cess. Come up with re­search ques­tions [ahead of time] that will help you de­ter­mine how well each solu­tion fits each crite­rion. And I’ll add a few things:

1. Formulate a coherent epistemology and transparent strategies for how to combine different kinds of evidence. For example, how much weight do you put on a cost-effectiveness analysis compared to experts’ opinions or evidence from a study? If we’re looking at a study, what’s more important — results from an RCT or an observational study? How much more important? (A small illustrative sketch follows this list.)
2. Have clearly defined crite­ria to eval­u­ate op­tions. How much weight does each crite­rion carry? Create pre-set ques­tions that will help you rate each idea on each crite­rion. So for ex­am­ple, you might have a crite­rion [re­lated to] scale. What ex­actly do you mean by “scale” — the scale of the prob­lem, or the scale of the effect you can have with your ac­tions? Hav­ing very spe­cific ques­tions defin­ing your crite­ria will help you make [your anal­y­sis] cross-com­pa­rable.
3. De­cide how many hours you’re go­ing to put into re­search­ing each of the ques­tions and ap­ply that time equally. This will in­crease the odds of your re­search be­ing con­sis­tent and cross-ap­pli­ca­ble.
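
As a purely hypothetical illustration of point 1 above, here is one way those evidence weights could be made explicit. The weight values and effect scores are assumptions chosen for the example, not recommendations.

```python
# A minimal sketch of making evidence weights explicit, per point 1 above.
# The weights and effect scores are illustrative assumptions only.

EVIDENCE_WEIGHTS = {
    "rct": 1.0,            # randomized controlled trials count fully
    "observational": 0.5,  # observational studies count half as much
    "expert_opinion": 0.25,
}

# Hypothetical findings for one intervention: (evidence type, estimated effect on a 0-10 scale).
findings = [
    ("rct", 6.0),
    ("observational", 8.0),
    ("expert_opinion", 9.0),
]

# Weighted average, so stronger kinds of evidence dominate the combined estimate.
total_weight = sum(EVIDENCE_WEIGHTS[kind] for kind, _ in findings)
combined = sum(EVIDENCE_WEIGHTS[kind] * score for kind, score in findings) / total_weight
print(f"Combined effect estimate: {combined:.2f}")
```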

Les­son No. 3: De­sign re­search to af­fect de­ci­sion-mak­ing


We know that our goal is to have the high­est im­pact pos­si­ble. That’s why we need to think about de­sign­ing our re­search pro­cess in a way that af­fects de­ci­sion-mak­ing. I know that the path is not ob­vi­ous, es­pe­cially for re­search or­ga­ni­za­tions, where the met­rics of suc­cess are not that tan­gible and we do not always have a clear and testable hy­poth­e­sis about how im­pact will oc­cur as a re­sult of our ac­tions.


That may lead to a few prob­lems. First of all, our re­search might be in­effec­tive, or less effec­tive, if we don’t have a clearly defined goal and [plan for] how our ac­tions will lead to [that goal]. It’s very hard to de­sign a mon­i­tor­ing and eval­u­a­tion sys­tem to en­sure that we are on the right track. As a re­sult, we might end up not know­ing that we don’t have any im­pact.


Second, it’s very hard to identify assumptions and mitigate risk if we don’t have a clear path to impact. There’s a whole range of hidden assumptions [in the] underlying design of our research process, and if they are left unchallenged, one of the steps may fail, making the project less effective.


Next, [consider your] research team. They might not have a shared understanding of the project, and therefore it could be harder for them to stay on track. Simply put, they might feel less motivated because they don’t see how their specific actions — for example, the research paper they’re writing — are connected to the long-term impact.


Finally, it’s very hard to communicate a complex research initiative if you don’t have a clear path to impact. It’s harder to quickly convey the aims of your work to your stakeholders, decision-makers, funders, or organizations you want to collaborate with.


It’s also very hard to reach agree­ment among your stake­hold­ers. Very likely, the main way your re­search could have an im­pact will be in­form­ing other de­ci­sion-mak­ers in the space: fun­ders, or­ga­ni­za­tions [that di­rectly provide in­ter­ven­tions], and other re­searchers. But if we don’t di­rectly in­volve de­ci­sion-mak­ers in cre­at­ing our agenda, we won’t en­sure that our re­search will be used.


Do you re­mem­ber those stud­ies from the be­gin­ning [of my talk] — the ones about the Eiffel Tower, crocodiles, and sheep? Well, un­for­tu­nately, [those kinds of stud­ies oc­cur] in pub­lic health, eco­nomics, and global de­vel­op­ment — the do­mains I care about. There’s this no­tion that [con­duct­ing re­search in the] so­cial sci­ences, as with any other sci­ence, [will re­sult in] some­thing that even­tu­ally changes the world. That’s why the cur­rent fo­cus is on cre­at­ing good study de­sign.

Instead, we should be focusing on creating good research questions. [Transcriber’s note: Even a well-designed study won’t have much impact if it doesn’t seek to answer an important question.] We have to have a more planned, applied approach. If a researcher and funder are looking at those studies, they might see how an intervention may ultimately lead to impact. They also might realize there are some missing pieces that will stop the research from having impact.


So how do we prevent that? You can build a theory of change. It’s a methodology that is very commonly used in global development and [direct-service] organizations, but I think we can cross-apply it to research organizations. Essentially, it’s a comprehensive description and illustration of how and why a desired change is expected to happen as a result of your actions.

I won’t go into de­tails on how to build a the­ory of change. I’m just go­ing to pre­sent the core [el­e­ments].

First, you look at the inputs and your activities. [Those represent] the programmatic competence of your organization. They might include presenting at conferences, doing research, and writing. Then, you look at the outputs. Those could include writing three research reports, or answering 10 research questions. Then, you look at the intended outcomes of your actions. Maybe a funder decides to fund different projects [than the ones they selected] before your research was published. And lastly, you look at the goal that I think we all share, which is to reduce suffering and increase happiness.

So that’s the core of your the­ory of change. And I’m go­ing to pre­sent one ex­am­ple of that, which I think is one of the best illus­tra­tions of ap­ply­ing a the­ory of change.


The Hap­pier Lives In­sti­tute, an or­ga­ni­za­tion started by Michael Plant and Clare Don­ald­son, built a beau­tiful the­ory of change.

[Slide: the Happier Lives Institute’s theory of change]

As you can see, they con­sid­ered mul­ti­ple paths for hav­ing an im­pact, but ul­ti­mately it can be sum­ma­rized in this sim­ple the­ory of change. They con­duct the­o­ret­i­cal hap­piness re­search. For ex­am­ple, they look at sub­jec­tive well-be­ing as a met­ric. Also, they do ap­plied hap­piness re­search, eval­u­at­ing char­i­ties and iden­ti­fy­ing promis­ing in­ter­ven­tions and pro­jects. Then, they com­mu­ni­cate to key or­ga­ni­za­tions and in­di­vi­d­u­als, hop­ing to [bring about] very tan­gible re­sults. Ul­ti­mately, their goal is hap­pier lives.

[Slide: a spreadsheet with research questions as rows and decision-makers as columns]

You not only need to build a theory of change to know your impact. You also need to involve stakeholders to create an effective research agenda. You might recognize [the above] spreadsheet. But this time, instead of listing interventions, we have specific research questions we’re thinking about exploring. Instead of criteria, there are different decision-makers. These might be funders or other organizations you want to collaborate with.

In this situation, you can ask each of them to rate each research question on a scale from zero to 10, indicating how useful it would be for them to have an answer. When everybody fills in [the spreadsheet with their] answers, you can simply sort it and see which research questions will affect their actions and decisions the most.
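
For illustration, below is a minimal sketch of that sorting step; the question names, decision-makers, and 0-to-10 ratings are made-up placeholders.

```python
# A minimal sketch of the stakeholder-rating spreadsheet described above.
# The research questions, decision-makers, and 0-10 usefulness ratings are
# made-up placeholders; each decision-maker would fill in their own column.

ratings = {
    "Question 1": {"Funder": 9, "Implementing org": 4, "Other researchers": 7},
    "Question 2": {"Funder": 3, "Implementing org": 8, "Other researchers": 5},
    "Question 3": {"Funder": 6, "Implementing org": 6, "Other researchers": 9},
}

# Rank questions by how useful an answer would be across all decision-makers.
ranked = sorted(ratings.items(), key=lambda item: sum(item[1].values()), reverse=True)
for question, by_stakeholder in ranked:
    print(question, "total usefulness:", sum(by_stakeholder.values()))
```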

Sum­mary

I’ll sum­ma­rize the main prin­ci­ples [of my talk]:


* Each piece of research needs to have a conclusion. If not for that principle, I might have researched the exact level of cortisol in fish and how it affects water quality. But I wouldn’t be able to introduce you to the Fish Welfare Initiative, an organization working on improving the welfare of fish. They got started this year through [Charity Entrepreneurship’s] Incubation Program.
* We also need to have a sense of how the in­ter­ven­tion we are re­search­ing com­pares to other op­tions. If not for this prin­ci­ple, I would [know the pre­cise] effect of io­dine sup­ple­men­ta­tion, but I wouldn’t have com­pared it to folic acid and iron for­tifi­ca­tion. I wouldn’t be able to an­nounce that For­tify Health, pur­su­ing the lat­ter, has re­cently re­ceived its sec­ond GiveWell In­cu­ba­tion Grant of $1 mil­lion and is on its way to be­com­ing a GiveWell-recom­mended char­ity.
* The last key element of impactful research is [the relevance of your research to] decision-making in your field and the decision-makers whom you might affect. To ensure that your research will be utilized, root it in a theory of change and involve the decision-makers in creating an impactful research agenda. Without this approach, I’d know exactly how many cigarettes are sold in Greece, but I wouldn’t be able to introduce you to two charities working on tobacco taxation.

As effective altruists’ motto goes, we need to not only figure out how we can use our resources to help others the most, but also [act on what we find]. So let’s close this gap between research and decision-making, and make research matter. Thank you.

Moder­a­tor: You men­tioned that when re­search con­cludes that more re­search is needed, it’s re­ally im­por­tant for us to do that [fol­low-up] re­search. But I think one of the prob­lems is that those [fol­low-up] re­search ques­tions are just in­her­ently less in­ter­est­ing to re­searchers than novel ques­tions. So how do we bet­ter in­cen­tivize that work?

Karolina: Yeah, that’s a very good ques­tion. I think it re­ally de­pends on the com­mu­nity you’re in. If you’re in a com­mu­nity of effec­tive al­tru­ists, we all agree that our main driver might be al­tru­ism and the im­pact we can have through our re­search. So that’s the biggest in­cen­tive.

We just need to remind ourselves and all researchers [in the effective altruism community] that their research is not just an intellectual pursuit; it’s also something that can change the lives of other beings. Keeping this in mind will help you choose research questions that are not only interesting, but also impactful.

Moder­a­tor: Great. Thank you.

Karolina: Thank you.