Brendan Eappen: Lessons from an EA-Aligned Charity Startup

When GiveWell wrote that they were looking for charities to work on micronutrient fortification, Fortify Health rose to the challenge. With help from a $300,000 GiveWell grant, they began to work on wheat flour fortification, hoping to reduce India’s rate of iron deficiency. In this talk, co-founder Brendan Eappen discusses the charity’s story and crucial decisions they faced along the way. He also offers advice to members of the effective altruism community interested in pursuing similar work in the field of global development.

Below is a transcript of Brendan’s talk, which has been lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.

The Talk

I want to talk about what we’re doing at Fortify Health, and then, more broadly, about some of the central tensions [I’ve experienced] as someone who started an effective altruism [EA]-aligned charity startup in a world of other global health actors.


Our goal at Fortify Health is to improve population health by addressing widespread iron-deficiency anemia and neural tube defects in India. We’re doing that through fortification (i.e., adding vitamins and minerals like iron, folic acid, and vitamin B12 to the foods that people already eat).


The main problem we address is anemia. Half of the women and children in India suffer from anemia. It is generally characterized by the blood’s inability to carry enough oxygen to the brain and muscles, which leads to fatigue, exhaustion, stunted cognitive development, economic loss, pregnancy complications, and other problems.

Neural tube defects are the most common birth defect in India. Nearly four out of every one thousand children born have this defect. Essentially, it is a malformation of the spine. The spinal cord or the brain can be severely damaged, leading to physical and mental impairment. This is most often caused by folic acid deficiency in the first month of pregnancy — in some cases, before people even know they’re pregnant.

Fortification, or adding vitamins and minerals to food, is an evidence-based, cost-effective strategy to prevent these problems. But why did we start working on fortification?


Here is some backstory: As someone who has been interested in effective altruism for quite a while, I was excited to see Charity Science Health launch. They looked at a number of interventions that they thought members of the EA community could successfully deploy as new startups. They looked at GiveWell’s “Charities we’d like to see” blog post. They took suggestions from around the community and came up with a list of the top five interventions they thought a non-expert could implement.

Charity Science Health had been launched in-house [at Charity Science] to send text messages reminding new moms to keep their kids healthy and safe by vaccinating them. We, as you may have guessed, started Fortify Health as part of the iron and folic acid fortification group.


One of the central questions that we had to consider was whether young, foreign non-experts could responsibly have an impact. This was not a question we took lightly. The EA movement is very enthusiastic. There are a lot of resources and young people who are trying to do good. But the question is: Is it always good to deploy [an intervention] without the relevant expertise?

I’m curious if anyone in the audience has any insight into what some key concerns would be for someone like me — who doesn’t necessarily have a background in the area that we’re working in, the geographic region, or a wealth of organizational experience — to get involved.

Audience member: Why launch a startup when there are other people who are already established [in the region]?

Brendan: Absolutely. Do we need to do our own thing when there’s already an infrastructure, a framework, a wealth of expertise, and some true experts [working on this problem]? Why not work together? I’ll touch on this.

Audience member: You could be doing more harm than good because you’re actually displacing state activity.

Brendan: Absolutely. We would be devastated to find out that we jumped into this in a flaky or transient way. Other actors within the country — even government actors — may have decided not to work on these problems. Or maybe they would bring a different wealth of expertise, resources, power, and credibility — and perhaps a better sense of the best approaches to these issues.

Audience member: And what’s the sustainability of your approach?

Brendan: Right. What happens when we go away, or when the effective altruism movement changes its mind about the most important priorities to fund? Are we able to sustain the work we’re doing? Is someone else able to sustain that work? That is a really important question.

I’ll add a few more things. [Let me address] the consideration of whether to join the existing infrastructure and the experts who are already aligned on these issues, rather than [starting a venture] that could perhaps antagonize them or [drain] their resources. That is a difficult question. A lot of EAs would think that the counterfactual value lies in creating something new that wouldn’t be done otherwise.

But that relies on an assumption that existing organizations couldn’t use you or the resources you could bring to the table — and that the kind of work you’re doing in isolation is adding value in a way that won’t hurt other organizations. I think that’s an assumption that needs to be tested on a case-by-case basis, and with great humility.

Another related concern is gauging neglectedness. Is there really a gap? If there are great organizations already working on a problem, do we really need one more? And if one more organization could add value, [who should run it]? After all, I’m not an expert. Couldn’t we find someone who could start an organization [more effectively] than I could? And if those people are already busy doing other good work, are we good enough?

Also, what does the world look like when we start Fortify Health? What does our launch mean for the pool of resources that are going toward [the issue of fortification] and others like it? Could we potentially cause harm in some way to the movement, to the particular branding of this intervention, to the other actors, or to the ability of government to invest in these kinds of interventions? Could we even cause direct harm through a short-sighted or superficial approach to the intervention itself? Could we implement something that hurts people?

Then, from our perspective, could we [gain skills]? Could we, as individuals, become better able to have an impact on the world if we took on this project?

Resolving these key considerations was nontrivial. We sought guidance within and beyond the effective altruism community, which has a number of interesting ideas about the moral [implications] of this kind of work. We asked people who are very critical of these moral frameworks whether it made sense for us to get involved. We talked to experts within the fortification space, as well as people who have related expertise in nutrition or public health and maybe don’t think that fortification is the best solution. And we talked to other people who have a sense of our competencies and could help us gauge whether we were the right people to try to do something like this — or what we would need to do in order to become the right people to do it. We assessed ourselves to understand what a team would look like that complements our strengths and weaknesses.

We realized that, to some extent, we could rely on external evaluation to support these kinds of judgments. We considered: Are other organizations willing to put skin in the game? And could we get the kind of funding that would validate this effort? We knew that if we applied for a GiveWell Incubation Grant, they would conduct a thorough review of the team as well as the intervention.

We also thought about alternative career paths. What would we be doing otherwise? If we didn’t work on this project, what would it look like to work in a great organization that already existed, or in support of a government project [that could serve as an] advisory and support system and help us build expertise?

(I want to specifically recognize Charity Science Health, which encouraged us to take on the project initially, gave us the seed funding to get started, and then provided mentorship that continues to this day, but has taken different forms.)

So how do you actually start? That is a nontrivial question.


We took some early steps to gain expertise, asking: What is fortification? How are other people doing this? Do we really believe it works as well as we think it does? Then, we filled in our knowledge gaps by talking to experts. We sought to identify the best possible targets. If we were going to start a new project or bring new resources to the table, where could we best put those to use?

Then, we talked to local organizations — in our case, in India — and we were invited to visit. They suggested we come to India, see the work that they were doing, and determine whether [it made sense] for us to get involved.

We were particularly concerned about whether we would be welcomed to the table. We wondered if local organizations would be interested in the kinds of funding and additional support that we could offer. We asked: What strategies are (or are not) being employed? Can we learn from those? And are there actually gaps? Do we need to exist in order for those gaps to be filled, or could other organizations perhaps do the work better — either without us, or with [us playing a supporting role]?

As we were meeting with these organizations, we were anticipating applying for a GiveWell Incubation Grant.


One of the things we wanted to do was connect [the other fortification organizations] to the same potential funding streams [in the EA community], so that they could continue their work at a larger scale. But there are some ideological differences, as well as organizational constraints, that created barriers to [doing that].

As we readied ourselves to apply for an incubation grant, we presented GiveWell with:

- Information about India and why we thought that was the best place to start a new project.
- Conversation notes from the various organizations in India that we had consulted with, and a synthesis of the strategies that we could employ.
- Proposals [centered on] how we thought we could add the most value to the existing ecosystem.
- [The projected cost of our proposals] and how they applied to the cost-effectiveness analysis that GiveWell had developed in-house for iron fortification.

Long story short: We were awarded a GiveWell Incubation Grant. We were asked to refine our strategy, build our dream team, and implement our approach. This is a step that I think often doesn’t happen in EA circles, because we spend a lot of time on abstract questions in the process of setting priorities, which is fun and important. But [implementation requires] an entirely different skill set. [There’s a lot involved in making] that transition responsibly.

I want to spend the rest of this talk discussing some key clashes between what I’ll call the hyperbolized effective altruist (which doesn’t describe the movement as a whole or any particular people, but rather serves as an extreme example) and what I’ll call our typical global health actor. These are people who are thoughtfully and ambitiously doing good work in the field. They perhaps have a different moral frame and different intuitions about some of the best strategies to deploy. What I’ll argue is that these are central tensions that we faced as an EA-funded, EA-aligned organization that saw a lot of value in the criticism [directed at us] by the global health community. And we sought to reconcile these two camps.


An EA might think that leaders should be strong advocates and defenders of the EA movement’s approach. If you are running an EA charity, then perhaps you should believe that the EA movement’s moral framework takes the cake. But someone critical of the movement might suggest that leaders need to humbly engage with local actors and their local moral worlds. What really matters to the people who are working on these issues and who are affected by these kinds of interventions?

This was critically important, and could have been a source of substantial failure early on. When we were talking to local NGOs and government officials, we had to be humble enough to learn from their approaches. We had to understand what they were already doing and why. And we had to be open-minded enough to consider what might, at first, seem like less cost-effective or more difficult-to-measure approaches. We also needed to be very receptive, and even proactive, about some of the weaknesses and blind spots of our own strategies (and even some of the weaknesses and blind spots associated with how EAs [operate]).


Also, as EAs, we might want to hire other EAs and build a hierarchy under their leadership. We know that “value drift” in an organization can be very dangerous. But someone critical of the EA movement might say, “Wait a second — we need to develop a high-level local team that has a voice.” Collaboration across a team is strongest when it’s participatory and non-hierarchical — when local voices representing what’s possible and ideal are actively involved in setting goals for the organization.

This was hard to do. Everyone we sought to hire was older than I was, more informed than I was, and had a better sense of the local context. And that was exactly what we wanted. I would encourage other EAs to do the same. Don’t just aggregate other effective altruists who think [the way you do]. Instead, bring in people who might have very different opinions about what’s important and how to accomplish the organization’s goals.

This was particularly important when we thought about our core strategy: What were we willing to do, and what were we unwilling to do? It resulted in us considering some of the less cost-effective approaches to resolving the [iron deficiency] problem, because we thought they might [better fit the goals of the people we’d brought in]. This required a degree of flexibility on our part.


EAs may often want to independently execute a consequentialist strategy for a few reasons. One is measurability and credit. EAs are very interested in causal attribution and knowing, for example, that Fortify Health actually made a difference. But that can be very limiting. We don’t want to isolate ourselves from other organizations that either can make our work stronger or benefit from our work. Collaboration is key. Even if it muddies the waters on causal attribution, I would really encourage anyone who’s working in these spaces to think about where the most productive collaborations could lie.

Others who are critical [of the EA movement] might encourage us to engage other actors, and learn from and respect their approaches. These could include strategic approaches, but also the moral frames used to motivate people in doing this work. This took a great deal of proactivity. We had to recognize the EA blind spots. We had to recognize our naivete and the extent to which we didn’t have the expertise that some of these organizations and people had from their decades of work in the field.


As EAs, we also might dismiss justified trade-offs. We might do a cost-benefit analysis to recognize the benefits our work might have and the harm it could cause. We might say what seems best for the world and where we can optimize the difference [between taking action and reaching an ideal outcome]. That’s inadequate for a lot of folks who are critical of effective altruism. They would suggest we be wary of any intended or unintended negative consequences that we could [inflict] in the course of doing the work, and encourage us not to treat the harms as negligible. They would remind us that someone suffering as a result of our work [is an effect that] really matters. Even if you’re doing something that seems net beneficial, that doesn’t excuse you from considering the importance of mitigating other risks. This is something that I think is often missed in effective altruism, or at least missed in the discourse around these ideas. Ignoring these harms is harmful to the people you’re leaving behind and, from the perspective of other organizations, very alienating. I think this is something EAs need to be quite cautious of.

For us, this included considering the risk of how fortification could be harmful to a subset of the population. It included considering how shifting the grinding of wheat from a local level to a larger, centralized level could hurt local millers and their businesses. That weighed into our decision to focus on some of the already-centralized processes. We even considered the potential risks of various dosing paradigms that we could use when modeling the effectiveness of the intervention.


Effective altruists want to focus on scale and cost-effectiveness. These guide us as a community. But others in global health would have us focus on the vulnerable, prioritizing those who would not be reached otherwise and using that as motivation to innovate for greater benefit. We don’t have to stick to what we know in terms of what works and what doesn’t and how much it costs. We can challenge the social structures that define those costs and constrain the work that we do. I think the EA community has gotten better at stepping back like this and thinking bigger — and maybe even accepting greater uncertainty than we have [in the past]. But we still haven’t become as flexible as some of the other radical and wonderful global health actors.

One of the reasons why we’ve fallen into this trap is our focus on scale. We decided to work in India in large part because of how much we thought we could grow. But as a community, we may be systematically neglecting smaller countries for which scale of this proportion just isn’t possible. I commend organizations like Project Healthy Children that are working on fortification in some of the countries that have been systematically neglected.

[We also may fall into this trap when] considering whether to do what’s easier or put our heads together and affect the more challenging-to-reach populations. This can mean the difference between [focusing on] centralized fortification via the mills catering to the most well-off people and [focusing on] decentralization, which might be harder or more expensive to monitor, but necessary to reach the people who are poorest or the bulk of the population.


EAs may want to focus on abstract problems and rational responses. That can be fun and good. But others want to focus on solidarity, compassion, and caregiving — the humanistic side that, at its core, focuses on individual people rather than the scale of a problem. I invite folks to integrate the two in the way we think about our work and [align it] with the people we serve, as well as in how we talk about this. [We risk] losing people by talking about the massive impact we can have on countless unknowns. But we’re aggregating all of this data because we care about every single individual who’s affected. And most people who are working in this field [pay a different kind of attention] than most EAs do to the people they’re serving, and for whom they are trying to improve some aspect of life.


EAs often want to deploy and measure vertical interventions. They’re cleaner and easier to implement. The evidence base may be more robust. But others who are critical of effective altruism may really push us to strengthen existing systems and focus on long-term impact and sustainability. As we try to work at scale, we should recognize that there are others [with many more resources] than the EA community who are involved in this game. If we’re going to be able to work together, we should be thinking about how our work corresponds with and strengthens the work that other substantial actors — particularly government actors — may be doing in the field.

Although I’ve highlighted a few central tensions that I think characterize this work and have been important to some of the operational and strategic decisions that we’ve made, I do want to [acknowledge] that others in the field want a lot of the same things that the EA community wants.


We’re trying to thoughtfully, creatively, and enthusiastically take action to serve the needs of others. So instead of siloing ourselves based on a strategy or a consequentialist worldview that [runs counter] to mainstream approaches, I think we need to find ways to integrate. We need to humbly ask ourselves, “How can we learn from the people who are doing this good work, and how do we work together?”

To close, the themes of this talk have been fortitude, collaboration, and humility.


I want to provide you with the fortitude or boldness to take well-guided actions to overcome these barriers to doing something rather than nothing — and to surround yourself with the right kind of support to do your work responsibly and in alignment with other actors.

I want you to collaborate with others, to have meaningful, close engagement with an empowered team. Think of it not just as your EA vision being implemented by agentless actors, but rather as something that is collaborative. That collaboration extends beyond your team into the space where other actors are working.

I also encourage you to be more humble, to embrace this humility, to prepare to be wrong, and to change course [when implementing] your strategy. Prepare to stop when what you set out to do doesn’t work, or doesn’t work well enough, and to support the work of others instead. Be eager to learn from others rather than [forcing] an agenda to advance an EA cause or an EA-aligned strategy.

I want to put up a few pictures of some of our awesome team members in India, who have been at the core of our strategic decisions in terms of how we implement our work.


Our strategy has changed course, and we’ve learned quite a bit by having a really strong group of individuals putting their heads together on how to use the awesome resources that EA can provide in the most effective — but maybe not always the most intuitive — ways for an EA. Thank you.

Moderator: Thank you. That was a scintillating talk. The many ways in which we have natural inclinations — and how they may butt heads with the global health community — are not obvious.

I’m hoping you can contextualize your talk with some of the specific work that you do. Could you share how long it took before you were doing something on the ground, and what some of the biggest trials were in that initial startup [phase]?

Brendan: Yeah, absolutely. I think that one of the early [questions] was: How far do we need to go in order to know that this is a good idea? I think it wasn’t until we were sitting down face-to-face with other people who were working on these kinds of projects in India that we felt we had a strong enough invitation to join them and be a productive actor in this space. [This work also enabled us to see that] the gap was large enough for there to be something meaningful we could do, something that wouldn’t happen without us. That happened after we had spent about four months on this project, conceptualizing these ideas.

The most essential work happened over the course of last summer, when we were trying to assemble the dream team. We were figuring out what it took to do the work that we didn’t know how to do ourselves. GiveWell is reviewing us now, and I hope we can get some more money to do this work. But the money doesn’t speak for itself. You have to implement a strategy that is thoughtful and effective, and those strategies have really come from our partners and teammates in India.

Moderator: It’s a difficult research problem in and of itself, let alone figuring out how to execute in a totally unknown environment. How can somebody determine whether or not this is something that they would be a good fit for?

Brendan: I think that the critical considerations are:

- How flexible can you be?
- What kind of attitude can you set, and what kind of culture can you build, within the team?
- How willing are you to be wrong?
- How willing are you to defer to the judgments of other people and their expertise? Can you set yourself up to rely on the knowledge of others to determine the best possible path forward?

Moderator: Thank you for your presentation.
