Thanks for checking—I think we can leave it here in case someone else missed it :)
Thanks Linch! It’s in the last bullet point of the beginning “notes” section and also mentioned in the body of the doc.
Interesting. If most voters are in favor of cutting aid, AND this is clear to the MPs, then why would MPs have an incentive to vote against cutting aid?
One reason I can think of is if there is a well-organized interest group that, even though small in size, tries very hard to influence the MPs, leading them to help this group rather than the general population. (This seems to be the case in some areas in US policy.) In this case, you may want to create the impression of having a well-organized interest group—which seems hard, but I wonder what strategies could help.
Another is that some MPs are personally against cutting aid and are willing to vote against it—even though their voters favor cutting aid, voters don’t care about this issue passionately and won’t punish the MP much if they vote against. In this case, I wonder what strategies can persuade them.
Sorry if I come off as skeptical. I just think that working through the theory of change, the incentives, and the psychology of the MPs could help you refine your strategy—but no need to spend much time replying if you don’t find this useful.
Related, I wonder if the emails are still a bit boilerplate—after seeing a few, an MP may be able to tell how they were generated? I imagine there are people who know
generally what works best in influencing lawmakers / lobbying
specifically what works well in the UK
so would be curious what strategies they would propose.
(I wonder if something like doing an opinion poll of voters and presenting that info would help, but not sure how practical that is. Perhaps you could partner with someone already doing a poll / a major website or newspaper.)
Hey, I’m working on some research on the most impactful areas within ML-aided drug and vaccine discovery. I can share that with you once I’m done.
Thanks for sharing your experience! I’ll share mine. I attended the workshop in July 2019 in California.
Like you, I also came in hoping to become a hyper-efficient rationality machine, overcoming problems like procrastination that I had struggled with all my life. I was hoping to be taught how to use my System 2 to fight my lazy, uncooperative System 1 that always stood in the way of achieving my goals.
My biggest surprise was that the workshop was much more about understanding, working with, and leveraging your System 1. I was unconvinced and confused for quite a while, but more recently I finally realized that my existing habit of constantly forcing myself to do things I wasn’t intrinsically excited about was not going to end well—it was already producing a significant amount of unhappiness (which did not set me up for a sustainable, successful, and impactful career path) before I noticed.
There were a number of major positive changes in my life over the past year and more, and it’s hard to say what role the CFAR workshop played, but I think it definitely played some. For one thing, it made me aware for the first time of the possibility of working with, rather than against, my System 1, and even though I wasn’t convinced for quite a while, it triggered discussions and reflections that eventually led to very productive rethinking and ultimately a different outlook. (E.g. I realized I was constantly battling myself to get things done because I was on a career path that wasn’t right for me. I’m not totally sure how my new career path will pan out, but I think I’m in a much better position to notice what I like and don’t like, and to switch gears accordingly.)
So if you are like I was back then—wanting to be super efficient and impactful but struggling with procrastination and other barriers, and hoping to become a rational machine with no System 1 to distract you—you should consider attending a CFAR workshop :) . It will not give you what you wanted, but there’s a good chance it will change your life in a positive way.
(I’d say overall it had a moderate positive effect, which was in line with what my friend told me the workshop did for them before I went. They also said one of the best things to come out of the workshop was that it prompted them to get a therapist, which turned out to be pretty useful. I also got a therapist after the workshop (prompted by factors other than the workshop), and I’d highly recommend considering therapy (and/or coaching) if you are struggling with issues in life you don’t know how to solve, even if you consider yourself generally “mentally healthy”.)
Thank you for your post! I am an IDinsight researcher who was heavily involved in this project and I will share some of my perspectives (if I’m misrepresenting GiveWell, feel free to let me know!):
My understanding is that GiveWell wanted multiple perspectives to inform their moral weights, including a utilitarian perspective of respecting beneficiaries’/recipients’ preferences, as well as others (examples here). Even though beneficiary preferences may not be the only factor, they are an important one, and one where empirical evidence was lacking before the study—which is why GiveWell and IDinsight decided to do it.
Also, the overall approach is that, because it’s unrealistic to understand every beneficiary’s preferences and target aid at the personal level, GiveWell and we had to come up with aggregate numbers to be used across all GiveWell top charities. (In the future, there may be the possibility of breaking it down further, e.g. by geography, as new evidence emerges. Also, note that we focus on preferences over outcomes—saving lives vs. increasing income—rather than interventions, and I explain here why we and GiveWell think that’s a better approach given our purposes.)
My understanding is that ideally GiveWell would like to know children’s preferences (e.g. value of statistical life) if that was valid (e.g. rational) and could be measured, but in practice it could not be done, so we tried to use other things as proxies for it, e.g.
Measuring “child VSL” as their parents/caretakers’ willingness-to-pay (WTP) to reduce the children’s mortality (rather than own, which is the definition of standard VSL)
Taking adults’ VSL and adjusting it by the relative values adults place on individuals of different ages (there were others).
(Something else that one could do here is to estimate own VSL (WTP) to reduce own mortality as a function of age. We did not have enough sample to do this. If I remember correctly, studies that have looked at it had conflicting evidence on the relationship between VSL and age.)
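To make the second approach concrete, here is a minimal sketch. All numbers below are made up for illustration—they are not the figures from the study:

```python
# Hypothetical sketch of the age-adjustment approach: take adults' VSL
# and scale it by the relative value respondents place on saving lives
# at different ages. Every number here is illustrative, not real data.

adult_vsl = 40_000          # hypothetical adult VSL, in USD
relative_value = {          # hypothetical survey-based relative values:
    "under_5": 1.4,         # respondents value saving an under-5 life
    "adult": 1.0,           # 1.4x as much as saving an adult life
}

child_vsl = adult_vsl * relative_value["under_5"] / relative_value["adult"]
print(child_vsl)  # 56000.0 under these illustrative numbers
```

The point of the sketch is just that the adjustment is a simple ratio; the hard empirical work is eliciting credible relative values in the first place.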
Obviously neither of these is perfect—we have little idea how close our proxies are to the true object of interest, children’s WTP to reduce their own mortality—if that is a valid object at all (and what to do if it isn’t gets into tricky philosophical issues). But both approaches gave a higher value for children’s lives than for adults’ lives, so we concluded it would be reasonable to place a higher value on children’s lives if donors’/GiveWell’s moral weights are largely/solely influenced by beneficiaries. But you are right that the philosophical foundation isn’t solid. (Within the scope of the project we had to optimize for informing practical decisions, and we are not professional philosophers, but I agree more discussion of this by philosophers would be helpful.)
Finally, another tricky issue that came up was—as you mentioned as well—what to do with “extreme” preferences (e.g. always choosing to save lives). Two related questions that are more fundamental are
If we want to put some weight on beneficiaries’ views, should we use “preferences” (in the sense of what they prefer to happen to themselves, e.g. VSL for self) OR “moral views” (what they think should happen to their community)? For instance, people seem to value lives a lot more highly in the latter case (although one nontrivial driver of the difference is that questions on moral views were framed without uncertainty—a practicality we couldn’t get around, as including uncertainty in an already complex hypothetical scenario trading off lives and cash transfers seemed extremely confusing to respondents).
In the case where you want to put some weight on their moral views (I don’t think that would be consistent with utilitarianism—I’m not sure what philosophical view it is, but it certainly doesn’t seem unreasonable), what do you do if you disagree with their view? E.g. I probably wouldn’t put weight on views that were sexist or racist; what about views that hold you should value saving lives above increasing income no matter the tradeoff?
I don’t have a good answer, and I’m really curious to see philosophical arguments here. My guess is that respecting recipient communities’ moral views would be appealing to some in the development sector, and I’m wondering what should be done when that comes into conflict with other goals, e.g. maximizing their utility / satisfying their preferences.
Hi,
Staying in your current job for a bit to help your family (as well as build a bit of runway) makes a lot of sense.
Re future career paths:
If you are interested in getting into policy in your home country: I’m not sure which South Asian country you’re from, but if it’s India, I’ve seen some IAS officers get degrees from top US policy schools. Having such talent join the civil service sounds like it could have a really positive impact, though I’m not sure whether working there would be frustrating. It’s probably good to talk to people who have worked there.
Another idea is to join a non-profit that works in your home country. E.g. I work at IDinsight, and in our India office there are a few Indian nationals with degrees from top US policy schools. They work on engagements with governments, foundations, and non-profits in India. Having local connections and context seems to really help with this type of work. Some other options include CHAI and Evidence Action. Also, there are a number of EA non-profits working in India, like Fortify Health and Suvita. (Probably more in the animal welfare space, if you’re interested.)
Doing tech work for socially impactful orgs could be a good path too, e.g.:
(There are probably lots more; these are just some examples I came across.)
Overall, as long as you are not sick of your current job, its good work-life balance makes it a good place to be while you learn about different options (and it gives you some financial security). So you’re in a good place to explore!
Hey Johannes, I don’t have ideas for a strictly speaking EA org, but here are some examples where chat bots have helped in public/social sector or humanitarian contexts—perhaps they can give you some ideas on NGO partners who may benefit:
• DoNotPay, a “robot lawyer” app that uses NLP models to provide legal advice to users, has assisted people with asylum applications in the US and Canada
• HelloVote, which helps voters find voting information and sends reminders to vote
• UNICEF’s U-Report collects opinions from marginalized communities around the world, which can inform decisions by governments and non-profits
• Raheem.ai allows people across the US to report on police conduct and partners with communities to use data collected to hold police accountable
• GYANT, a chat bot that provides diagnoses and advice based on reported symptoms, including assessing the likelihood that someone has COVID-19 or Zika
• Praekelt, a South African non-profit, has a few mobile health programs, including HealthConnect which has provided information on COVID-19 (some NLP elements), and MomConnect, an SMS-based help desk to provide health advice to new mothers (no NLP elements so far but it is being explored; languages are not English)
Some more examples I found in their concept note:
“to meet emission reduction targets under the Kyoto agreement, the Swiss government committed to purchasing 2 million tons of certified emissions credits between 2015 and 2020 (estimated at $24 million USD[1]) by financing an NGO distributing water-purifying chlorine dispensers in Africa. Did the $24 million reduce 2 million tons in carbon emissions? Almost certainly not, as the assumption households would boil water in absence of the filters was untrue.” (Footnote 1: Note that chlorine dispensers treat water and reduce child diarrhea incidence, and hence save lives—they may just not be good at reducing carbon emissions.)
an EU-commissioned assessment of the UN’s Clean Development Mechanism (CDM), which is used to certify offsets under the Kyoto protocol, concluded that “CDM still has fundamental flaws in terms of overall environmental integrity.” It noted that 85% of the projects analyzed have a “low likelihood that emission reductions are additional and are not over-estimated.” (p2)
Hi Jason, your blog is really interesting. I wonder if you have a medium/long-term theory of change for how your work, or the progress studies community (if there is such a community yet, or will be in the future), will have real-world impact—e.g. how you or others in the community plan to engage with researchers/academics (to collaborate or build the field), policymakers, investors, scientists, technologists, entrepreneurs, etc. And what concrete changes do you hope to see or effect?
(Do you focus just on research, or do you also aim for real-world impact? And in either case, how do you measure the success of your project?)
Hey Brian, Giving Green has done some research, including on offsets, and they found some interventions to be effective and others not. You can read more here: https://www.givinggreen.earth/carbon-offsets
I see. Let me know if I’m understanding this correctly: Founders Pledge aims to have cost-effectiveness estimate numbers, which involves a lot of work especially for topics like growth and climate change, whereas Open Phil takes a more qualitative approach for such topics with higher uncertainty. (If so, I am also curious about the philosophy behind your approach—I’m really uncertain which one works better, and that’s a bigger conversation.)
Re topics to look into, I second Michael’s suggestions: labor markets, firms, and monetary policy in developing countries. There’s also: trade, infrastructure, industrial policy, legal system, institutions etc. (Nick Bloom whom I mentioned earlier had the hypothesis that improving management practice in LMICs could be pretty impactful, and that requires a type of education/training not commonly discussed in LMICs.)
One thing tangentially related is Emergent Ventures India. (They don’t have a formal website—all updates seem to be posted on the Marginal Revolution blog.) It’s not growth-specific but rather just for innovative ideas that improve welfare. They don’t have any rigorous analysis (so I’m not sure whether it will fly with EAs) but the projects look cool and it could be a high-potential model (if expanded to Africa etc.).
Happy to keep in touch—will shoot you a DM!
BTW, have you checked out Nick Bloom’s work on management practice? He shows it’s a significant constraint on productivity in LMICs (of course, maybe not as fundamental as institutions/politics, but could still be an important one). This interview with him is interesting: https://conversationswithtyler.com/episodes/nicholas-bloom/
Thanks a lot for your research and writeup! Really nice to see follow-up work on this topic.
A few thoughts:
Is growth work neglected? I’m not sure that’s the right question to ask. After all, “micro” development (direct service delivery) work isn’t neglected—tons of money goes into it each year—but most of it had no good evidence behind it, which motivated the founding of GiveWell, IDinsight, etc. So perhaps the more relevant question is whether “effective” work on growth is neglected. Though I agree with you that it may be hard to assess the field as a whole compared to individual orgs.
The orgs you listed:
As you said, some (like IGC) focus more on “randomista” type work (I think this applies to Y-RISE too though they care more about effects at scale). I’m guessing there are more orgs focused on the more “macro” aspect of growth, e.g. growth diagnostics.
ODI’s fellowship program is a really interesting model, but I’m not sure how effective it is or how much they measure their impact. I’ve met a few former fellows who, after finishing an undergrad econ degree, went to work in a ministry in an LMIC for some time, and they told me it wasn’t clear they had much impact. I suspect ODI may want to place more experienced people—they now say they only select masters/PhD graduates, but from what I heard they pay very little, so perhaps that’s a constraint on impact too. It’s a really interesting and high-potential model, but I suspect it can be greatly improved. (IDinsight, where I currently work, has a similar approach of embedded learning partnerships with LMIC governments, though as one might expect there are a lot of challenges in working with and influencing them. IDinsight for now focuses more on “randomista”/micro topics like health, education, and cash transfers, but topics like tax administration and state capacity are on our minds too.)
Relatedly, perhaps an impactful thing would be to fund scholarships for bureaucrats in LMICs to study in top policy schools, e.g. Harvard’s MPA-ID. I heard Latin America (e.g. Mexico, Peru) did a lot of this and it has shaped how governments work there, but I don’t know much; I also know some Indian IAS officers have done this.
I am not sure it would necessarily be that much work to evaluate the potential impact, or just the track record, of an org. One can sometimes establish a credible causal link using case studies—e.g. Open Phil cited a few impressive achievements of CGD and attributed certain results to orgs in their history-of-philanthropy studies. In fact, I was hoping for some kind of analysis like that for the growth-promoting (or research) orgs. But of course you (the authors) would know your situation better than I do!
It’s really exciting to see EA-based charity analysis in other countries! A few quick comments/questions:
Re the standout charities, you said “This is because none of the organizations’ Philippine operations have had a full cost-effectiveness analysis done by GiveWell.” It may be feasible to adjust GiveWell’s CEA for some of these—e.g. for deworming, you could vary parameters like baseline worm prevalence. I’m not sure how doable that is, but it’s an idea worth exploring.
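For illustration, the crudest version of that adjustment would just scale the modeled value per dollar by relative prevalence. The function name and every number below are hypothetical (GiveWell’s actual CEA adjusts many more parameters, and benefits may not scale linearly with prevalence):

```python
# Illustrative only: a crude way to adjust a deworming cost-effectiveness
# estimate for a different baseline worm prevalence, assuming benefits
# scale roughly linearly with prevalence. All numbers are made up.

def adjusted_value_per_dollar(base_value_per_dollar: float,
                              base_prevalence: float,
                              local_prevalence: float) -> float:
    """Scale the modeled value per dollar by relative worm prevalence."""
    return base_value_per_dollar * (local_prevalence / base_prevalence)

# e.g. if a model assumed 40% prevalence and the local context has 25%,
# the estimate scales down proportionally:
print(adjusted_value_per_dollar(10.0, 0.40, 0.25))  # 6.25
```

Even this toy version shows why the idea is attractive: the prevalence inputs are often available from local health surveys, so a rough local adjustment may be cheaper than a full re-analysis.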
I think it’s definitely a great idea to communicate this more widely and promote evidence-based giving in the Philippines, but how to do so beyond the EA community is a nontrivial question. May be worth checking with The Life You Can Save.
Among the charities, Oxfam is the only one that seems really broad (rather than running more focused programs like the others). The main reason it’s recommended seems to be that TLYCS recommends it (rather than being evaluated using the rigorous criteria of effectiveness, cost-effectiveness etc.). It’s probably also much less funding constrained than the others. So I’m a bit skeptical of it.
Sorry if I missed this, but what’s the reason for there being only 1 animal welfare charity? My impression is there are many animal welfare groups active in Asia, though I don’t know much about the Philippines. Lewis Bollard’s animal welfare newsletter (and I think the EA animal welfare fund he manages) mentions some orgs in Asia—not sure if you can find more that operate in the Philippines.
Thanks for your reply! Please keep us posted here on your plan and how to donate etc. as you figure them out.
Another thought: may be helpful to work with some experienced NGO or someone experienced in political campaigning to craft the fb ads, targeting strategy etc. Seems like a pretty specialized thing worth drawing from existing expertise to maximize the chance of success.
Thanks Sanjay! This looks really important. For those considering supporting you, would be helpful to see something like
Timeline—how urgent this is, e.g. when the voting is expected to happen
Any details of the plan to the extent you have it, e.g. which Tory MPs are relevant (the journalist mentioned some MPs are angry—do they know which ones?), rough estimate of budget
“I get the impression that it’s not clear that the government will win on this.” Why is that? (Or mostly based on what the journalist said?)
Any lessons from similar previous campaigns (you probably don’t have time to do a deep analysis, but anything quick would probably help)
I appreciate you writing up these comments! There are some great suggestions here as well as things I disagree with. As the author of the “extremely positive” post let me share some thoughts. (I’m by no means an expert on this so feel free to tell me I’m wrong.)
1. Quantitative cost-effectiveness analysis
Summary of my view: I’m pretty torn on this one but think we may not want to require a quantitative CEA on charities working on policy change (although definitely encourage the GG team to try this exercise).
On one hand, I think it’s great to at least attempt one, to develop a better understanding of one’s causal model and sources of uncertainty, and to get a ballpark estimate if possible (though sometimes the range is too wide to be useful). On the other hand, requiring quantitative cost-effectiveness estimates can restrict the type of charities one can evaluate. I took a brief look at Founders Pledge’s model of the Clean Air Task Force, which seems to be a combination of 1) their track record, 2) their plan, and 3) subjective judgements. While the model seems reasonable (I haven’t looked deeply enough to tell how much I agree), I do think requiring such a model would preclude evaluating orgs like the Sunrise Movement—or, if we take your concerns about them seriously (which I’ll address below), let’s just say orgs like that, of which there are many in the climate space: orgs with a more complex theory of change than, say, CATF, where any model’s inputs would be mostly extremely subjective (compared to the CATF one), which makes the model less meaningful. Perhaps you would say these orgs are precisely the ones not worth recommending. On this I agree with Giving Green that we should hedge our bets among different theories of change and hence look at different types of orgs. (Even though, as I’ll elaborate later, I agree with recommending CATF more strongly, I think potentially recommending orgs more similar to TSM is valuable.)
So I think it is definitely good to attempt a quantitative CEA, and I highly encourage the GG team to do so, even for an org like TSM. (I would have liked to engage with Founders Pledge’s models more but didn’t end up doing it—that would be a nice exercise.) But I’m unsure about requiring one for a recommendation, especially in a space with so much uncertainty. (I was trying to look up other EA charity recommenders and saw that Animal Charity Evaluators also doesn’t seem to have a quantitative CEA for all of its top charities—I haven’t checked them all, but here’s an example without one. Not that this is a sufficient argument.)
I have to say I’m pretty uncertain about how much to use quantitative CEA and I am happy to be convinced that I’m wrong.
I do agree Giving Green should communicate with less confidence in their recommendations than, say, GiveWell, which explicitly recommends charities that are amenable to evaluation with higher-quality evidence (e.g. RCTs) and hence carry lower uncertainty.
2. Offsets
1) Offsets vs policy change
My read is that GG recommends offsets because they see a huge market especially among companies that want to purchase offsets, and it’s hard to convince them to instead donate the money to the maximally impactful thing. However, I agree that they should communicate this more clearly: that for more “flexible” donors they strongly recommend policy change over offsets.
2) Cost-effectiveness of offsets
I agree it would be good to come up with cost-effectiveness estimates for offsets even though they will also be pretty uncertain (probably something between the uncertainty of GiveWell current top charities and climate change orgs working on policy change). In addition to telling people to buy offsets with real additionality, it’s probably also good to put a proper price tag on things especially if they differ a lot.
3. The Sunrise Movement (TSM)
Summary of my view: I’m more positive than the author on the impact they achieved (and perhaps their impact potential), and less negative on the potential for negative impact, although I’m really unsure about it as I’m far from an expert. I do agree that GG should recommend CATF more highly than TSM.
Impact they achieved: The fact that Biden and some other Democrats adopted climate change plans similar to what’s proposed by TSM (see the “Policy consensus and promotion” section of GG’s page on TSM) is some evidence of their influence, although of course we can’t be sure. (This article argues it was valuable for groups on the left to have a more unified framework for addressing climate change, and it seems like TSM is one of the multiple groups that had an influence in the process.)
Potential for negative impact:
In terms of actual policies: I mostly trust Biden and overall the Democratic members of Congress (rather than the most “progressive” ones) to go for policies that will be less polarizing than the most radical proposals, and I’m not too worried about TSM pressuring them into doing things they don’t think are good ideas.
In terms of public opinion: will TSM make climate change a more polarizing issue than it already was? On one hand, we do see a majority of Americans concerned about climate change; on the other hand, the extreme level of polarization (even in the absence of TSM) already shapes people’s views on many things. So I’m not sure.
(I think my arguments are pretty weak here though because I don’t understand the US political system very well.)
Why GG should recommend CATF more highly:
Outside-view perspective: even if the expected values of the two orgs look the same, we should account for the fact that CATF has much more of a track record.
Inside view perspective: under the Biden administration it seems like CATF has a very clear vision of what they can do (see here); for TSM it’s less clear—even if they achieved some impact before the election in getting candidates to take climate change more seriously and adopt a more unified platform, it’s less clear how they will influence policy now. If I were choosing between the two at this moment it’s definitely CATF.
(Right now they sort of do this, labeling CATF a “good bet” and TSM “shows promise”, though we probably want something clearer than those labels—and apparently the team did not mean to recommend CATF more highly.)