I have a PhD in finance and am the strategist at Affinity Impact, the impact initiative of a Singapore-based family office that makes both grants and impact investments.
Wayne_Chang
Hi Huwelium, thanks so much for your post! I’m also advising someone on highly cost-effective interventions, so I found your thoughtful analysis to be very interesting. My question relates to your cost effectiveness estimates vs GiveWell’s. Based on GiveWell’s spreadsheet, their modeling of DDK (2017) places that program’s cost effectiveness at 0.5x – 2.5x GiveDirectly’s. Their modeling of Bettinger et al (2017) places that program’s at 0.2x – 1.4x GiveDirectly’s. Both of these estimates are for consumption effects only and exclude non-pecuniary benefits like reduced teenage pregnancy. This seems most comparable with your document’s cost-effectiveness estimates, which are based on income effects only. However, for Pratham, you conclude its cost effectiveness is 20x − 200x GiveDirectly’s.
I’m having trouble understanding how your estimates are one to two orders of magnitude different from GiveWell’s. I’m probably missing something important so I was wondering if you’ve attempted a reconciliation. Any clarification on assumption differences and their relative importance would be very much appreciated. Thanks so much!
A company structure to consider would be a mutual organization where all profits go to members, which in your case would be the policy holders. Profits can be retained to grow the company or policy fees can be reduced by the amount of its profits. Mutuals have a long history and many of the most successful financial organizations in the US are mutuals (e.g. Vanguard, State Farm, Liberty Mutual, NY Life). You could develop an insurance brokerage mutual that offers products from different insurance companies. I’m not sure if there are mutuals in this space but this could be a potential structure to explore given its long history of success. Personally, I’m a huge fan of Vanguard and Jack Bogle. They’ve done tremendous good and helped millions retire with more money and fewer fees. I wish you the same success!
Thanks for posting this, kbog! I would be interested in your recommendation for someone donating to the EA funds. The Long Term Future and Global Development funds focus on humans and thus potentially run into the meat eater problem. For every dollar donated to the above funds, what would be an appropriate amount to donate to the Animal Welfare Fund that is enough to offset this issue? Thanks!
Thanks for your response, kbog!
Animal welfare issues are plausibly getting worse and not better so I’d be less confident to assume it will not be an issue in the future. As the world develops and eats more meat, Compassion in World Farming estimates that annual factory farm land animals killed could increase by 50% over the next 30 years. Assuming people’s expanding moral circle will reverse this trend is dangerous when the animal welfare movement has progressed little over the past few decades (the number of vegetarians in the US has been flat; there have been some animal welfare legislative victories but also setbacks like ag-gag rules). Innovations like clean meat could help but it is still early, and there are also ways technology can make things even worse. Assuming animal welfare issues remain as they currently are (neither deteriorating nor improving) seems to me a plausible and more responsible projection.
If so, for the Long Term Future EA Fund, let’s assume the Animal Welfare EA Fund “offset ratio” (to account for the meat eater problem) is the same for future generations as it is for the current generation. Based on your blog’s estimate of a nickel a day, it costs a person ~$1000 to offset a lifetime of meat consumption ($0.05/day x 365 days/year x 50 years). It seems your estimate is for people living in rich countries though, so maybe 30% of that or ~$300 is more applicable to the average human. This can be compared to the Long Term Future Fund’s expected cost effectiveness of saving a human life (for just the current generation). I’ve seen one estimate that assumes a reduction in x-risk of 1% for $70 billion spent (again for the current generation only). This leads to ~$1000 per human life saved ($70 billion / 7 billion humans / 1%). If so, the meat eater problem offset ratio for the Long Term Future Fund is very roughly ~30% (~$300 offset per life saved / ~$1000 to save a life).
Let’s apply a similar logic to the Global Health EA Fund. Instead of ~$1000 to offset a lifetime of meat consumption, let’s assume 10% of that for someone living in extreme poverty, or ~$100. GiveWell estimates that AMF can save a life for ~$3000, leading to an offset ratio of ~3% (~$100 offset per life saved / ~$3000 to save a life). This is two orders of magnitude larger than your comment response (of 0.008% ~ 0.04% from $0.08 ~ $0.4 / $1000). One reason might be because you’re only accounting for one year of the meat eater problem when I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison). However, I’ve not had a chance to dive into your spreadsheet so I could be mis-using your results. Any corrections or reactions are much appreciated!
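As a sketch, the two offset-ratio calculations above can be written out explicitly. All inputs are the rough assumptions already stated in this comment (a nickel a day, the 30% and 10% adjustments, the $70B x-risk estimate, and GiveWell's ~$3000 AMF figure), not established figures:

```python
# Back-of-envelope meat-eater-problem offset ratios (all inputs are
# the rough assumptions from the discussion above, not established figures).

offset_rich = 0.05 * 365 * 50        # ~$912: a nickel a day over a 50-year life
offset_avg = 0.30 * offset_rich      # ~$274: average human, ~30% of rich-country cost
offset_poor = 0.10 * offset_rich     # ~$91: person living in extreme poverty

# Long Term Future Fund: $70B buys a 1% x-risk reduction across 7B people
cost_per_life_ltf = 70e9 / 7e9 / 0.01          # = $1,000 per life saved
ratio_ltf = offset_avg / cost_per_life_ltf     # ~30%

# Global Health Fund: AMF saves a life for ~$3,000 (GiveWell estimate)
ratio_gh = offset_poor / 3000                  # ~3%

print(f"LTF offset ratio: {ratio_ltf:.0%}")           # ~27%
print(f"Global Health offset ratio: {ratio_gh:.1%}")  # ~3.0%
```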
Finally, I’m curious as to why you think offsetting makes little sense under utilitarianism. I’m thinking it would actually be required if one were uncertain about the conversion ratio between human and animal welfare. If we were certain about the conversion, we should just do the one intervention that’s most cost effective, in whatever domain it happens to be in (human or animal). But if we were uncertain about the conversion, we will need to ensure that one domain’s actions don’t inadvertently produce overall negative utility when the other domain’s consequences are summed together. In the case of saving a human life, we wouldn’t want to lower overall utility because of our underestimation of the meat eater problem. On the other hand, we wouldn’t want to just focus on animal welfare if it turns out human welfare is especially significant. Offsetting cross-domain spillover effects avoids this dilemma (I teach finance, where analogies include hedging different FX risks or asset-liability matching). For the meat eater problem, it ensures saving a human life does not lead to negative utility even if we find out that animal welfare is unexpectedly important. The offset trades one animal life for another animal life, ensuring neutral utility impact within the animal domain.
Sorry for the long reply but I’ve been worrying about the meat eater problem so found your post to be especially interesting and informative. Any response you might have would be very appreciated!
Hauke’s calculation simply determines a standard Benefit/Cost ratio. If it costs $10 to avert a tonne of CO2 that provides benefits of $417 (in damages averted), this Benefit/Cost ratio equals 41.7. This ratio should be directly comparable to Copenhagen Consensus ‘Social, economic, and environmental benefit per $1 spent.’ For the Post-2015 Consensus, ‘Climate Change Adaptation’ is listed as providing a Benefit/Cost ratio of 2 while climate-related ‘Energy Research’ has a ratio of 11. I would weight these results from meta-level research much more strongly than those from a single study. But even if we believed Hauke’s study, a benefit/cost ratio of 41.7 still lags ‘Reduce Child Malnutrition’ (ratio of 45) or ‘Expanded Immunization’ (ratio of 60). This hardly suggests that “we should consider prioritizing climate change over global development interventions.” The unconditional cash transfer benchmark that Hauke uses is a minimum and not representative of highly cost-effective interventions in global development. Using GiveWell’s estimates, deworming and malaria nets are more than 10x more cost-effective than cash. Before rushing to replace well-established priorities and interventions that are based on decades of research, we need to have substantial confidence in the new priority/intervention. This study is far from it.
Note that the Copenhagen Consensus and GiveWell results do not apply utility adjustments. If this new climate change study does so, its Benefit/Cost ratio would be distorted by improperly inflating Benefits, making the ratio appear larger than it actually is.
A 7% real investment return over the long-term is, in my opinion, highly aggressive. World real GDP growth from 1960 through 2019 averaged 3.5%. Since the proposed fund expects to invest over “centuries or millennia,” any growth rate faster than GDP eventually takes over the world. Piketty’s r > g can’t work if wealth remains concentrated in a fund with no regular distributions.
Even in the shorter run, it’s unrealistic to expect the fund to implement a leveraged equity-only strategy (or analogous VC strategy):
1) A leveraged approach may not survive (e.g. it may experience −100% returns). Even if the chance is small over a given year, this becomes increasingly likely over a longer horizon. Dynamic leverage strategies can be implemented to reduce this risk but these likely reduce returns too.
2) A high-risk strategy will result in extremely painful drawdowns. In bad times, any fiduciary running the fund will face enormous pressure to shift to a more conservative strategy. During the Great Depression, US equities declined by nearly 90% during the course of just 3 years, even without leverage. Sticking to the same approach in the face of a potentially worse decline is nearly unimaginable.
3) A consistently leveraged portfolio approach has never been done before over long investment periods. Foundation/university endowments are probably in the most analogous position and few apply leverage. Harvard tried a modest 5% leverage during the 2000s, and it blew up during the Financial Crisis.
4) Any successful strategy will be mimicked and thus face increasing competition and declining returns. If the fund grows to any significant size, it will start facing competition from itself. For example, Yale’s legendary endowment has seen declining returns from a ~9.5% real rate over the past 20 years to a ~5.5% one over the past decade. Similarly, given Berkshire Hathaway’s large size, it’s now increasingly difficult for Warren Buffett to beat the stock market.
Indeed, the proposed fund may actually have to be quite conservative for it to survive over time (through broad diversification even into low-return assets) and be accepted by the world (to avoid scrutiny or excess taxation). In my opinion, when investing over centuries with an unprecedented strategy, I would characterize a 2-4% real return (broad asset class diversification that keeps up with world GDP) as reasonable, and a 5%+ real return (all equity with or without leverage) as aggressive.
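To illustrate why the return assumption matters so much over these horizons, here is a small sketch. The 7% and 3.5% rates come from the discussion above; the 2% annual ruin probability is purely an illustrative assumption, not an estimate:

```python
# How much larger a 7% real fund becomes relative to a 3.5% real world economy
for years in (50, 100, 200):
    ratio = 1.07**years / 1.035**years
    print(f"{years} years: fund is {ratio:,.0f}x the size of a GDP-tracking twin")

# Survival odds of a leveraged strategy with a small annual chance of total loss
p_ruin = 0.02  # illustrative assumption: 2% chance of a -100% year
for years in (10, 50, 100):
    print(f"{years} years: {(1 - p_ruin)**years:.0%} chance of surviving")
```

Even a small per-year ruin probability compounds into a sizable chance of not surviving a century, which is point 1) above in numbers.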
I don’t think it makes sense to compound the model distributions (e.g. from 1 year to 10 years). Doing so leads to non-intuitive results that are difficult to justify.
1) Compounded model results (e.g. 10x impact in 10 years) are highly sensitive to the arbitrarily assumed shape, range, and skewness parameters of the variable distributions. Also, these results will vary wildly from simulation to simulation depending on the sequence of random draws. This points to the model’s fragility and leads to unnecessary confusion.
2) The parameter estimates may use annualized growth rates, but they need not correspond to an annual time frame. Indeed, it is more realistic to make estimates for longer horizons because short-term noise averages out (i.e. Law of Large Numbers). In other words, it is far easier to estimate a variable’s expected mean than its underlying distribution. Estimates for the expected mean will already be highly uncertain. I don’t think it’s possible to reasonably defend distribution assumptions of the variables themselves.
The exercise is to compare giving-today vs. investing-to-give-later. The post usefully identifies key variables in this consideration. I think the most it can do is propose useful estimates of these variables’ expectations over the long run (i.e. their averages over time) and their key uncertainties (i.e. Knightian uncertainty, not quantifiable distribution parameters). If the expectations’ net sum is above 1, it makes sense to give later. If it falls below 1, it makes sense to give now. Reasonable areas of uncertainty can be further discussed and debated. Already, there will be much irreconcilable (rational) disagreement. Compounding returns using arbitrary distribution parameters won’t (and shouldn’t) reconcile any differences and likely confuses the matter.
I agree with Michael that a 70% allocation to US stocks is way too high. US stocks’ outperformance against international developed stocks can almost entirely be explained by the increase in the US market’s valuation (which shouldn’t be assumed to continue and indeed, is more likely to reverse). See AQR’s analysis on pg 6 here. Also, what about Emerging Market stocks? This should certainly get some allocation as well, especially if you’re focused on the next 100 years. China and India will increasingly be key economic players and have capital markets that will outgrow the US in importance. In fact, 6 of the 7 largest economies in the world in 2050 are likely to be emerging economies. When it comes to investing, beware of simply extrapolating the past into the future! The US markets have done well because the US has been the dominant country in the 20th century. This is unlikely to continue during this century.
A 10% global bonds/90% global stocks portfolio is likely to be more robust and not suffer from a USD/US historical bias. Keep it simple and avoid picking bond/stock market winners.
Thanks, Sanjay, I’m sharing a basic model I’ve written that highlights the trade-off for impact investments that seek both social impact and financial returns. This isn’t specifically about ESG but the key ideas still apply. The upshot: the investment must produce annually one percent of a same-sized grant’s social benefit for every one percent concession on its financial return. I construct impact investing’s version of the Security Market Line and quantitatively define what ‘impact alpha’ means.
This model was written a couple of years ago but since then, I actually haven’t applied it much. That’s because it’s hard to quantify impact, which is a key input that the model requires (and an input that any model will obviously require). There’s no established and easy way to monetize impact, especially given impact’s tremendous heterogeneity. Comparing the value of a year’s education versus a year’s health is hard enough. What about quantifying the counterfactual impact that a business has? Or that of the investor investing into the business? So modeling is helpful but at this stage, I think data is what we actually need most.
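The break-even rule from the model can be sketched with hypothetical numbers. The 7%/4% returns and the $1M size below are invented purely for illustration; the one-percent-for-one-percent rule is the model's conclusion as stated above:

```python
# Break-even condition for an impact investment vs. a same-sized grant.
# All numbers here are hypothetical, chosen only to illustrate the rule.

market_return = 0.07   # expected return of a comparable conventional investment
impact_return = 0.04   # expected return of the impact investment
concession = market_return - impact_return   # 3% of capital conceded per year

capital = 1_000_000
grant_benefit = capital * 1.0   # normalize: a same-sized grant yields 1 unit per $

# One percent of the grant's social benefit for every one percent of return conceded:
required_annual_benefit = concession * grant_benefit
print(f"Must produce {required_annual_benefit:,.0f} benefit units per year "
      f"({concession:.0%} of a same-sized grant's {grant_benefit:,.0f})")
```

Of course, this only moves the problem to quantifying `grant_benefit`, which is exactly the missing-data issue described above.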
Here’s the math on moral/financial fungibility:
...
You’re probably better off eating cow beef and donating the $6.03/kg to the Good Food Institute
Is refraining from killing really morally fungible with killing + offsetting? Would it be morally permissible for someone to engage in murder if they agreed to offset that life by donating $5,000 to Malaria Consortium? I don’t mean to be offensive with this analogy, but if we are to take seriously the pain/suffering that factory farming inflicts on animals, we should morally regard it in a similar lens to inflicting pain/suffering on humans.
So, no, moral acts are not necessarily fungible. It is better to not eat meat in the first place than to eat meat and donate the savings to farm animal charities (even if you could save more animals). This is obvious from a rights moral framework but even consequentialists would consider financial offsetting dangerous and unpalatable. The consequences of allowing people to engage in immoral acts + offsetting would be a treacherous and ultimately inferior world.
So your calculations are not the cost of eating meat but rather, the cost of saving animals. You have not estimated the cost of chicken/cow suffering (which would require estimating utility functions and animal preferences), but rather, the cost of alleviating suffering. Your low-cost numbers don’t imply that eating meat is inconsequential, but rather, that it’s very cost-effective to help chickens and cows. GiveWell’s $5,000 per human life doesn’t make human life cheap or murder trivial, it means we have an extraordinary opportunity to help others at a very low cost to ourselves.
Have you compared your analysis to this previous EA Forum post? Are there different takeaways? Have you done anything differently and if so, why?
I highly recommend the Founder’s Pledge report on Investing to Give. It goes through and models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving-later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.
Hi Owen, even if you’re confident today about identifying investment-like giving opportunities with returns that beat financial markets, investing-to-give can still be desirable. That’s because investing-to-give preserves optionality. Giving today locks in the expected impact of your grant, but waiting allows for funding of potentially higher impact opportunities in the future.
The secretary problem comes to mind (not a perfect analogy but I think the insight applies). The optimal solution is to reject the initial ~37% of all applicants and then accept the next applicant that’s better than all the ones we’ve seen. Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. Otherwise, we should continue rejecting opportunities. This allows us to better understand the extent of impact that’s actually possible, including opportunities like movement building and global priorities research. Future ones could be even better!
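For intuition, the ~37% rule can be checked with a small simulation. This is a standard secretary-problem setup, not anything specific to giving decisions; the candidate count and trial count are arbitrary:

```python
import random

def simulate(n, reject_frac, trials=20000):
    """Chance of picking the single best of n candidates when we reject the
    first reject_frac of them, then take the next one who beats all so far."""
    wins = 0
    k = int(n * reject_frac)
    for _ in range(trials):
        ranks = random.sample(range(n), n)      # rank 0 is the overall best
        benchmark = min(ranks[:k]) if k else n  # best among the rejected phase
        pick = next((r for r in ranks[k:] if r < benchmark), ranks[-1])
        wins += (pick == 0)
    return wins / trials

# The optimal rejection fraction is 1/e (~37%), which also succeeds ~37% of the time
print(f"Success rate at 37% rejection: {simulate(100, 0.37):.0%}")
```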
Thanks for the clarification, Owen! I had misunderstood ‘investment-like’ as simply having return compounding characteristics. To truly preserve optionality though, these grants would need to remain flexible (can change cause areas if necessary; so grants to a specific cause area like AI safety wouldn’t necessarily count) and liquid (can be immediately called upon; so Founder’s Pledge future pledges wouldn’t necessarily count). So yes, your example of grants that result “in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values” certainly qualifies, but I suspect that’s about it. Still, as long as such grants exist today, I now understand why you say that the optimal giving rate is implausibly (exactly) 0%.
This post (and the series it summarizes) draws on the scientific literature to assess different ways of considering and classifying animal sentience. It persuasively takes the conversation beyond an all-or-nothing view and is a significant advancement for thinking about wild animal suffering as well as farm animal welfare beyond just cows, pigs, and chickens.
I agree with Michael that concrete examples would be very helpful, even for researchers. A post should be informative and persuasive, and examples almost always help with that. In this case, examples can also make clear the underlying logic and reveal where the explanation can be confusing.
For example, let’s think about investing in alternative protein companies as a way to tackle animal welfare. Assume that in a future state where lots more people eat real meat (bad world state), the returns for alt-proteins in that state are low but cost-effectiveness is high. This could be because alt proteins have faced lower rates of adoption (low returns) but it’s now easier to persuade meat eaters to switch (search costs are now low since more willing-switchers can be efficiently targeted). The opposite situation is true too. In a good future state with few meat-eaters, alt protein returns are high but cost-effectiveness is low. So this scenario should put us in your table’s upper left quadrant (negative correlation between World State and Cost-Effectiveness + negative correlation between Return and Cost-Effectiveness).
This example illustrates how some of your quadrant descriptions may be confusing or even inappropriate:
“Underweight investment”: I agree with this one since to have a greater EV, you want investments with a positive correlation between returns and cost-effectiveness. This isn’t true for alt proteins here, so you should avoid them.
“Divest from evil to do good”: I don’t think this makes sense because alt proteins are not “evil” (but you should avoid them given the scenario).
“Mission leveraging”: I was quite confused initially because I was assuming that the comparison is to no investment at all. If so, then investing in alt proteins can lead to an ambiguous impact on volatility (depending on the relative magnitude of return changes versus cost-effectiveness changes). It could in fact be mission hedging (with an improvement in the bad state) if the low returns end up producing more total good because of the state’s high cost-effectiveness. However, I eventually realized that the comparison is to a fixed grant within the animal welfare space (although this was never made explicit in the post and may not be what most people would assume). If so, then indeed this is always mission leveraging since a positive correlation between the world state and returns does ensure lower volatility.
So as you can see, an example makes clear where table descriptions may be inappropriate and where a clearer description can be helpful. It also makes more concrete what various correlation signs mean and how to think about them.
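A toy two-state version of the alt-protein scenario makes the correlations concrete. All numbers below are invented purely for illustration:

```python
# Two equally likely future states, matching the alt-protein story above:
# bad state = many meat eaters, low alt-protein returns, high cost-effectiveness.
# All numbers are invented purely for illustration.
states = {
    "bad":  {"ret": -0.20, "ce": 3.0},
    "good": {"ret":  0.30, "ce": 1.0},
}
wealth = 100.0

for label, invest in (("fixed grant", False), ("alt-protein investment", True)):
    goods = []
    for s in states.values():
        deployed = wealth * (1 + s["ret"]) if invest else wealth
        goods.append(deployed * s["ce"])  # total good = $ deployed x cost-effectiveness
    mean, spread = sum(goods) / 2, max(goods) - min(goods)
    print(f"{label}: mean good = {mean:.0f}, spread across states = {spread:.0f}")
```

With these numbers the investment has a lower mean than the fixed grant (the negative return/cost-effectiveness correlation argues for underweighting it) but also a smaller spread of total good across states, which is exactly the volatility comparison discussed above.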
Minor suggestion: in your title and summary, please just write out “10 k” as 10,000. No need to abbreviate when people may be unsure that it’s actually 10,000 (given that it’s such a large difference).
Thanks for writing this! The very last sentence seems off. Did you mean to say every second (instead of minute)? Also, the number of farm animals that die every second should be 1⁄60 (not 1⁄120) of that in the “minute” table above.
This last sentence was quite shocking for me to read. It’s sad…but very powerful.
Got it. But I think the phrasing for the number of animals that die is confusing then. Since you say “100 other human [sic] would probably die with me in that minute,” the reference is to how many animals would also die during that minute. I think what you want to say is for every human death, how many animals would die, but that’s not the current phrasing (and by that logic, the number of humans that would die per human death would be 1, not 100).
I’d suggest making everything consistent on a per-second basis as smaller numbers are more relatable. So 1 other human would die with you that second, along with 10 cows, etc.
I would challenge your notion that you are over-analyzing the problem and that you must make a definitive decision soon.
1. In general, better knowledge and information leads to better decision making. If you are new to the EA community or to thinking deeply about philanthropy more generally, it is very unlikely that your current notions of how to give are appropriate.
2. Once you give away money, you cannot get it back. But money you save now can always be given away later. This argues for waiting in the presence of uncertainty. For example, in the optimal-stopping Secretary Problem, you should observe and reject the first 37% of all candidates before accepting the next one who beats them all.
3. There are tremendous consequences to your actions so you shouldn’t take this matter lightly. Going with your gut and intuition is not the appropriate response simply because you find your dilemmas to be difficult and overwhelming. Using GiveWell’s latest model, you can expect to save a life for probably $2500 or less. Since you have several hundred thousand pounds, you could save over 100 people with what you have. You could be like Oskar Schindler. Please don’t waste this precious opportunity.