Such an amazing talk, well done!! :)
Lucas Lewit-Mendes
Thanks for the response, Samuel. It would be interesting to hear GiveWell’s rationale for using the log of the average of earnings and consumption.
Hi Joel, thanks for your response on this!
I think my concern is that we can only “illustrate what would happen if GiveWell added decay to their model” if we have the right starting value. In its current form, I believe the decay model is not only adding decay but also inadvertently changing the total earnings effect over the first 11 years of adulthood (even though we already have evidence on the total earnings effect for these years).
However, as you noted, the main point certainly still holds either way.
As a separate note, I’m not sure if it was intentional, but it appears HLI has calculated log effects slightly differently to GiveWell.
GiveWell takes the average of earnings and consumption, and then calculates the log change.
HLI does the reverse, i.e. calculates the log of earnings and the log of consumption, and then takes the average.
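For concreteness, here is a minimal sketch in Python of the two orderings (the effect sizes are made-up placeholders, not the actual KLPS figures):

```python
import numpy as np

# Illustrative proportional effects only - placeholders, not the actual KLPS estimates.
earnings_effect = 0.30      # hypothetical proportional change in earnings
consumption_effect = 0.15   # hypothetical proportional change in consumption

# GiveWell-style: average the two outcomes first, then take the log change.
log_of_average = np.log(1 + (earnings_effect + consumption_effect) / 2)

# HLI-style: take the log change of each outcome, then average the two logs.
average_of_logs = (np.log(1 + earnings_effect) + np.log(1 + consumption_effect)) / 2

print(f"log of average:  {log_of_average:.3f}")   # ~0.203
print(f"average of logs: {average_of_logs:.3f}")  # ~0.201
# Because log is concave, the average of logs is never larger than the log of
# the average, so the choice of ordering can change the estimated effect at
# each follow-up.
```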
If we were to follow the GiveWell method, the effect at the second follow-up would be 0.239 instead of 0.185, i.e. there would be no decay between the first and second follow-up (but the size of the decay between the first and third follow-up would be unaffected).
If the decay theory relies only on a single data point, does this place the theory on slightly shakier ground?
I don’t have a good intuition on which of these approaches is better. Was there any rationale for applying the second approach for this calculation?
Full disclosure: I’m the primary author of a yet-to-be-published SoGive report on deworming; however, I’m commenting here in a personal capacity.
Thanks for this thought-provoking and well-written analysis! I have a query about whether the exponential decay model appropriately reflects the evidence:
If I understand the model correctly, this cell seems to imply that the annual consumption effect of deworming in the first year of adulthood is 0.006 logs.
As HLI is aware, this is based on GiveWell’s estimated annual earnings effect—GiveWell gets 0.006 by applying some adjustments to the original effect of 0.109.
However, 0.109 is not the effect for the first year of adulthood. Rather, it is the effect across the first ~11 years of adulthood (i.e. pooled earnings across KLPS rounds conducted ~10-20 years after treatment).*
I think this implies that the total effect over the first 11 years of adulthood (without discounting) is 0.006*11 ≈ 0.061 (using the unrounded annual effect).
Currently, the HLI exponential decay / no discounting model suggests the total effect over these 11 years is only 0.035. Should this instead be 0.061 to reflect the 11 years of evidence we have?
To make the total effect 0.061 over these first 11 years (without discounting), the first year annual effect would need to be 0.010 rather than 0.006 (I used the Goal Seek function to get this number).
As a result, HLI’s exponential decay model with 4% discounting produces lifetime earnings of 0.061 (coincidentally the same number as above). This is still a lot lower than GiveWell’s figure (0.115), but is higher than HLI’s (0.035, also coincidentally the same number as above).
Under this new approach, decaying earnings would reduce cost-effectiveness by 46%, compared to 69% in the HLI model.
As a sense check, we can set the number of years of impact in GiveWell’s model to 11 years (instead of 40 years), which gives us total earnings of 0.051. Therefore, I don’t think it would make sense if the decay model produced lifetime earnings of only 0.035.
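For what it’s worth, here is a rough Python sketch of the sort of calculation I mean. It treats the decay as a simple geometric series and backs the decay rate out of the 0.006 / 0.035 figures above, so it may not match the exact structure of HLI’s spreadsheet:

```python
from scipy.optimize import brentq

def total_effect(first_year, decay, years, discount=0.0):
    """Total effect of an exponentially decaying annual effect, optionally discounted."""
    return sum(first_year * decay**t / (1 + discount)**t for t in range(years))

# Back out the annual decay rate implied by the figures above
# (first-year effect of 0.006 logs, undiscounted 11-year total of 0.035).
decay = brentq(lambda d: total_effect(0.006, d, 11) - 0.035, 0.5, 0.999)

# "Goal Seek": find the first-year effect whose undiscounted 11-year total
# matches the ~0.061 implied by the pooled KLPS estimate.
first_year = brentq(lambda a: total_effect(a, decay, 11) - 0.061, 1e-4, 0.05)

# Lifetime (40-year) effect with 4% discounting under that starting value.
lifetime = total_effect(first_year, decay, 40, discount=0.04)

print(f"implied annual decay:        {1 - decay:.1%}")
print(f"re-fitted first-year effect: {first_year:.4f}")  # ~0.010
print(f"discounted lifetime total:   {lifetime:.3f}")    # ~0.061
```

Under these assumptions the outputs roughly reproduce the 0.010 first-year effect and 0.061 discounted lifetime total quoted above.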
Looking forward to hearing HLI’s thoughts on whether this approach better reflects the evidence or if I have misunderstood.
* Note that I have included both the 10th and 20th year, hence the 11 years.
Thank you for writing this, Mitra; it’s always valuable to hear critiques of current approaches in the EA community. As Peter noted above, your experiences and views would be greatly valued by the community.
I will attempt to respond to some of these questions, but note that my responses may not reflect the views of everyone in the community, and I may miss some crucial points.
Are you effective enough to notice that you could be 10x more effective if, instead of selling wells to villages, you focused resources on finding and supporting local entrepreneurs to build their own businesses doing so?
I think many EAs share this concern. You may be interested in this post, which is a critique of the current “EA” approach to global poverty.
Is your altruism effective enough to notice who is building them for $2200?
In the global poverty space, GiveWell aims to find charities who produce the most bang for buck. Of course, they may get this wrong sometimes in practice. But in theory, if someone achieves the same benefit for less cost, GiveWell would prioritise this opportunity.
Have you measured the return on impact well enough to know that $100 is the cost of the measurement, that it mostly collects meaningless numbers, and you could dig 5% more wells if you eliminated that?
I’d be interested to hear why you think measurement numbers are meaningless? Take the case of malaria control—if some areas are much more malarious than others, it seems important to spend some money to know which areas to focus on.
Have you noticed that as many as half the wells, solar panels, or toilets sit there broken waiting for the next donor? Are you measuring the right thing? Would you notice if those $2000 wells fail in 5 years while $2500 wells might fail in 10?
GiveWell’s cost-effectiveness analyses look at costs per treated person, so they try to account for situations where some of the treatments/materials are not used. They also account for the length of time a treatment lasts, which may resolve the second question.
Are you innovative enough to figure out that if you got the village to invest $1000, not only could you support twice as many villages but the village would be more likely to maintain it and use it, if they had skin in the game? Are you flexible enough to create a social enterprise rather than a charity, and to fund its overheads rather than expecting it to make a profit?
If you’re interested, this interview with Karen Levy of Evidence Action has sections on both “participatoriness” and “sustainability”. The issue may be too complex to cover here, but it would be valuable to understand the crux of your disagreement.
Are you innovative enough to invest in someone developing a cheaper machine, or one that reduces the cost even further, or are you demanding certainty and measurability too much to consider anything that pays off at scale, but only in the long term?
This is not precisely what you described, but Charity Entrepreneurship aims to find innovative solutions to challenging problems, such as the Lead Exposure Elimination Project, which advocates for lead paint regulation to reduce health and economic damage in the long term.
Hopefully this didn’t come across as dismissive, but I think it’s worth giving due credit to GiveWell and other members of the EA community.
All the best,
Lucas
Thanks for writing this, Caroline; really interesting post! I think it’s probably true that having talented people who are doing important things work really hard is higher impact than having people donate a little bit more money.
However, I am concerned about the idea that one should prioritize their impact over relationships with family, friends, romantic partners, or children, for two reasons:
1) I think it’s important to note that, personally, donating 10-20% of my income to effective charities literally makes zero difference to my life enjoyment.* But neglecting relationships would significantly reduce my life enjoyment. If lots of EAs are less happy (and potentially also their partners, friends, and family), that means the corresponding increase in impact from working hard would need to outweigh their reduction in happiness to provide net benefit.
2) If lots of EAs are less happy, it would presumably be harder to attract new people, and burnout would also increase. There might also be diminishing marginal returns to work in many cases (e.g. once GiveWell has analysed a charity for 100 hours, the 101st hour probably doesn’t provide that much more information). But returns to donations are probably linear, unless you are dealing with large amounts of money such that you run out of equally cost-effective opportunities.
I am unsure whether this means EAs shouldn’t work 7-day weeks and de-prioritise relationships, but I don’t think it’s clear they should. Of course, this might work for some people but not others!
* I may be in a particularly privileged position here, as I currently live with my parents and do not pay rent or have kids, but I suspect a high proportion of EAs would reach a roughly similar conclusion.
Thanks for writing this up Rumtin and Krystal!
Does the scope of the project allow for engagement with academics as well as policy-makers/public servants? While there are obvious risks in expanding the scope too broadly, I wonder whether collaboration with academia could be valuable for research efforts. There is also the possibility that some academic work (e.g. gain-of-function research) could undermine policy efforts, so perhaps coordination between EA-aligned policy-makers/public servants and academics could reduce this risk?
Thanks for writing this up!
This post does resonate with me, as when I was first introduced to EA, I was sceptical about the idea of “discussing the best ways to do good”. This was because I wanted to volunteer rather than just talk about doing good (this was before I realised how much more impact I could have with my career/donations) and I think I would’ve been even more deterred if I’d heard that donated funds were being spent on my dinners.
However, it sounds like my attitude might have been quite different to others, reading the comments here. Also, I suspect I would’ve ended up becoming involved in EA either way as long as I heard about the core ideas.
Thanks Nathan, that would make a lot of sense, and motivates the conversation about whether CEA can realistically attract as many people through advertising as Goldman etc.
I guess the question is then whether:
a) Goldman’s activities are actually effective at attracting students; and
b) This is a relevant baseline prior for the types of activities that local EA groups undertake with CEA’s funding (e.g. dinners for EA scholars students)
Hi Jessica,
Thanks for outlining your reasoning here, and I’m really excited about the progress EA groups are making around the world.
I could easily be missing something here, but why are we comparing the value of CEA’s community building grants to the value of McKinsey etc.?
Isn’t the relevant comparison CEA’s community building grants vs other EA spending, for example GiveWell’s marginally funded programs (around 5x the cost-effectiveness of cash transfers)?
If CEA is getting funding from non-EA sources, however, this query would be irrelevant.
Looking forward to hearing your thoughts :)
Thanks Ren, that makes a lot of sense!
Really interesting and well-written post about the Australian political context! Do you think EA grant makers should consider funding political campaigns by minor parties, or would you prefer to see EA-aligned volunteers/staff leverage other sources of funds?
Thanks, Lucas
Thank you for raising some interesting concerns JP.
I just wanted to note that the value of a market for bednets may be small relative to the value of philanthropic funding for several reasons:
1) Because we have already gone down the philanthropy path, ceasing to provide bednets philanthropically now would be unlikely to lead to a flourishing bednet market. See more on this here under “People may not purchase ITNs because they are unavailable in local markets or because they expect to be given them for free”.
2) There are many reasons people may buy fewer bednets in a market than is socially optimal: lack of available funds, present bias, and positive externalities (not internalising the societal benefit of reducing malaria transmission).
3) Business owners can sell other, less crucial goods and services, but in poverty-stricken locations they cannot provide and distribute thousands of life-saving/improving bednets to the poor.
Warm regards,
Lucas
Thanks, these are really interesting and useful thoughts!
Thanks very much Saulius, that all makes sense!
Happy new year!
Fantastic story! All the best :)
Thanks for your reply Saulius!
I wasn’t sure if the 65 years (or 569,400 hours) per dollar already accounts for the number of hours lived in disabling/excruciating pain (as opposed to milder suffering)?
To be more precise, if each hen lives for ~1.27 years (i.e. 11,125 hours), and a caged hen spends ~431 hours in disabling/excruciating pain, while an aviary hen spends ~156 hours in disabling/excruciating pain, I was thinking that the reduction in hours of suffering per dollar is actually 569400*(431-156)/11125 = 14,075 hours (or 1.6 years)?
In other words, I was trying to account for the fact that only 275 hours of suffering are being averted rather than 11,125 hours per hen. However, am I missing something that is contained in your model? (Note: I wasn’t sure if 65 years referred to hens or broilers, but the same sentiment would hold either way.)
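In code, the adjustment I have in mind is just the following (a minimal sketch using the figures above):

```python
hours_per_dollar = 569_400    # 65 years of hen-life affected per dollar
hours_per_hen_life = 11_125   # ~1.27 years lived per hen
caged_pain_hours = 431        # hours in disabling/excruciating pain (caged)
aviary_pain_hours = 156       # hours in disabling/excruciating pain (aviary)

pain_hours_averted_per_dollar = (
    hours_per_dollar * (caged_pain_hours - aviary_pain_hours) / hours_per_hen_life
)
print(f"{pain_hours_averted_per_dollar:,.0f} hours")        # ~14,075 hours
print(f"{pain_hours_averted_per_dollar / 8760:.1f} years")  # ~1.6 years
```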
As you note, this doesn’t account for differences in productivity (it was really interesting to hear that cage-free productivity might increase with scale!).
Thanks again for engaging in this discussion, and looking forward to hearing your response!
Sorry I’m a bit late to the party on this, but thanks for the well-researched and well thought-out post.
My two cents, as this line caught my eye:
Notably, working on these issues can often improve the lives of people living today (e.g. working towards safe advanced AI includes addressing already present issues, like racial or gender bias in today’s systems).
This line of reasoning concerns me. If working on racial/gender bias in AI were one of the most cost-effective ways to make people happier or save lives, then I would advocate it, but I doubt this is the case.
Rather, if the arguments for working on AI as an X-risk aren’t convincing enough on their own, it seems this would be reason to reconsider whether we want to work on AI.
Alternatively, the racial/gender bias angle could be used more for optics, rather than truly being the rationale behind working on AI. While it’s possible this would bring more people on board, there are risks associated with hiding what you really think (see the section “Longtermism vs X-risk” of this podcast for discussion of the issue—Will MacAskill notes “I think it’s really important to convey what you believe and why”).