I have a PhD in finance and am the strategist at Affinity Impact, the impact initiative of a Singapore-based family office that makes both grants and impact investments.
Wayne Chang
Hi, James! When it comes to assessing bednets vs therapy, or more generally, saving a life vs happiness improvements for people, the meat eater problem looms large for me. This immediately complicates the trade-off, but I don't think dismissing it is justifiable on most moral theories, given our current understanding that farm animals are likely conscious, feel pain, and thus deserve moral consideration. Once we include this second-order consideration, it's hard to know the magnitude of the impact given animal consumption, income, economic growth, wild animal, etc. effects. You've done a lot of work evaluating mental health vs life-saving interventions (thanks for that!), so how does including animals affect your thinking? Do you think it's better to just ignore it (as GiveWell does)?
I think this goes back to Joey's case for a more pluralistic perspective, but I take your point that in some cases, we may be doing too much of that. It's just hard to know how wide a range of arguments to include when assessing this balance...
Thanks, Vasco, for doing this analysis! Here are some of my learnings:
Fish and shrimp suffering is so much greater than that of other farm animals. I knew about their much larger numbers already, but I feel it much more viscerally now after better understanding the steps in your calculations and the assumptions behind them.
The overall picture of neglectedness (i.e. disability vs funding) is insensitive to the way pain/disability is measured or to the assumptions behind animal moral value (e.g. welfare range). Unless you literally assume farm animals matter zero, any reasonable assumption will show how neglected farm animals are relative to humans.
Wild animals have even greater neglectedness. We had initially discussed including them in the analysis, but they would dominate everything else, even farm animals. I had thought that limiting the analysis to certain types of animals (e.g. land vertebrates or just mammals) would result in farm animals being more prominent, but wild animals dominate even among just mammals.
Thanks, Ben, for writing this up! I very much enjoyed reading your intuitions.
I was a bit confused in a few places with your reasoning (but to be fair, I didn't read your article super carefully).
Nvidia's market price can be used to calculate its expected discounted profits over time, but it can't tell us when those profits will take place. A high market cap can imply rapid short-term growth to US$180 billion of revenues by 2027, or a more prolonged period of slower growth to US$180B by 2030 or 2035. Discount rates are an additional degree of freedom: we can have a lower level of revenues (not even reaching US$180B) if we assume lower discount rates. CAPM isn't that useful since it's an empirical disaster, and there's the well-known fact that high-growth companies can have lower, not higher, discount rates (i.e. the value/growth factor).
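To make the degrees-of-freedom point concrete, here's a minimal sketch of the trade-off between revenue timing and discount rates. All revenue paths, the 50% profit margin, and both discount rates are invented for illustration; they are not estimates for Nvidia.

```python
# Toy sketch: revenue timing and the discount rate trade off against
# each other when backing out fundamentals from a market valuation.
# All numbers below are hypothetical illustrations.

def present_value(revenues_bn, margin, rate):
    """Discounted value (in $B) of profits = revenue * margin."""
    return sum(rev * margin / (1 + rate) ** t
               for t, rev in enumerate(revenues_bn, start=1))

# Rapid growth to $180B of revenues by year 3, higher discount rate
fast_path = [110, 145, 180]
# Slower growth, reaching only $150B by year 6, lower discount rate
slow_path = [110, 118, 127, 135, 143, 150]

pv_fast = present_value(fast_path, margin=0.5, rate=0.12)
pv_slow = present_value(slow_path, margin=0.5, rate=0.06)

# The slower, lower-revenue path still supports a value of the same
# order of magnitude once the discount rate drops, so a market price
# alone can't pin down the timing or level of future revenues.
```

The point of the sketch is only that multiple (timing, rate) combinations are consistent with a single valuation, which is why backing out "implied revenues" from a market cap needs extra assumptions.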
Analysts are forecasting very rapid growth for Nvidia's revenues and profits. You mention Jan-2025 fiscal year-end revenues of $110 billion. The same source has analyst expectations for Jan-2026 year-end revenues of $138 billion. Perhaps you can find analyst expectations that go even further out, but these are generally rare and unreliable. So you could say that analysts expect Nvidia's revenues to reach $138 billion in 2025 (fiscal year ending Jan-26) and continue your analysis from there. However, analyst expectations are known to have an optimistic bias and aren't as predictive as market prices.
I was confused about how you used the 3-year expected life of GPUs in your analysis. It's irrelevant when it comes to interpreting Nvidia's market price, since Nvidia's future sales pathway can't be inferred from how long its products last. The more appropriate link applies to when Nvidia's customers must reach high sales levels, given that Nvidia is selling them its GPUs, say in 2025. If we add 3 (GPU life) to 2025 (last available year for analyst estimates), we get 2028 (not your 2027), with Nvidia's revenues at $138 billion based on analyst expectations (not your US$180 billion based on the market price).
I wasn't sure why you needed to estimate "consumer value" or "willingness to pay." This inflated your final numbers by 4x in your title of "trillions of dollars of value." And confusingly, it's inconsistent with how value is used in other parts of your article. Bringing in "consumer value" is weird because it's not commonly calculated or compared in economics or finance. Value generally refers to that implied by market transactions, and this applies to well-known concepts like GDP, income, addressable market size, market value, sales, profits, etc. (which is how you use it in most of your article). So we don't have a good intuition for what trillions of consumer surplus means, but we do for hundreds of billions of sales.
So instead of ending with "trillions of consumer value," for which there are no intuitive comparisons, it's better to end with x billions of sales (profits aren't reliable since high-growth companies can go years and years without them, e.g. Amazon). You can then compare this with other historical episodes of industries/companies with high sales growth and see if this growth is likely/unlikely for AI. How fast did Internet companies, or the SaaS industry (software as a service), or Apple get to this level of sales? Is it likely (or not) that AI software companies can do the same within y years?
In case you haven't seen these, here are some related resources that might be useful: 1) Damodaran's valuation of Nvidia (from June 2023, so already dated given Nvidia's rapid growth), 2) Sequoia's talks on the large AI software potential (not much in terms of hard numbers but useful for historical analogs), and 3) ARK's AI note from 2023 (self-promoting and highly optimistic but provides estimates for the AI software market in the many trillions by 2030).
Thanks, Ben! I enjoyed reading your write-up and appreciate your thought experiment.
What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view.
This criticism seems unfair to me:
It seems applicable to any type of advocacy. Those who promote global health and poverty are likely biased toward foreign people. Those who promote longtermism are likely biased toward future people. Those who advocate for effective philanthropy are likely biased toward effectiveness and/or philanthropy.
There's no effective counter-argument since, almost by definition, any engagement is possibly biased. If one responds with, "I don't think I'm biased because I didn't have these views to begin with," the response can always be, "Well, you engaged in this topic and had a positive response, so surely you must be biased somehow, because most people don't engage at all." It seems then that only criticisms of the field are valid.
This is reminiscent of an ad hominem attack. Instead of engaging in the merits of the argument, the critique tars the person instead.
Even if the criticism is valid, what is to be done? Likely nothing, as it's unclear what the extent of the bias would be anyway. Surely, we wouldn't want to silence discussion of the topic. So just as we support free speech regardless of people's intentions and biases, we should support any valid arguments within the EA community. If one is unhappy with the arguments, the response should be to engage with them and make valid counterarguments, not speculate on people's initial intuitions or motivations.
Thanks so much for such a thorough and great summary of all the various considerations! This will be my go-to source now for a topic that I've been thinking about and wrestling with for many years.
I wanted to add a consideration that I don't think you explicitly discussed. Most investment decisions made by philanthropists (including the optimal equity/bond split) are outsourced to someone else (a financial intermediary, advisor, or board). These advisors face career risk (i.e. being fired) when making such decisions. If an advisor recommends something that deviates too far from consensus practice, they have to worry about how they can justify this decision if things go sour. If you are recommending 100% equities and the market tanks (like it did last year), it's hard to say, "But that's what the theory says," when the reflexive response by the principal is that you are a bad advisor because you don't understand risk. Many advisors have been fired this way, and no one wants to be in that position. This means tilting toward consensus is likely the rational thing for financial advisors to recommend. There are real principal-agent issues at play, and this is something acutely felt by practitioners even if it's less discussed among academics.
I suspect the EA community is subject to this dynamic too. It's rarely the asset owners themselves who decide the equity mix. Asset allocation decisions are recommended by OpenPhil, Effective Giving, EA financial advisors, etc. to their principals, and it's dangerous to recommend anything that deviates too far from practice. This is especially so when EA's philanthropy advice is already so unconventional and is arguably the more important battle to fight. It can be impact-optimal over the long term to tilt toward asset allocation consensus when not doing so risks losing the chance to make future grant recommendations. The ability to survive as an advisor and continue to recommend over many periods can matter more than a slightly more optimal equity tilt in the short term.
Keynes comes to mind: "Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally."
Thanks for posting this, Jonathan! I was going to share it on the EA Forum too but just haven't gotten around to it.
I think GIF's impact methodology is not comparable to GiveWell's. My (limited) understanding is that their Practical Impact approach is quite similar to the impact methodology of USAID's Development Innovation Ventures (DIV). DIV's approach was co-authored by Michael Kremer, so it has solid academic credentials. Importantly, though, the method takes credit for the funded NGO's impact over the next 10 years, without sharing that impact with subsequent funders. The idea is that the innovation would fail without their support, so they can claim all future impact if the NGO survives (the total sum of counterfactual impact need not add to 100%). This is not what GiveWell does. GiveWell takes credit for the long-term impact of the beneficiaries it helps but not for the NGOs themselves. So this is comparing apples to oranges. It's true that GiveWell Top Charities are much more likely to survive without GiveWell's help, but this leads to my next point.
GiveWell also provides innovation grants through their All Grants Fund (formerly called Incubation Grants). They've been funding a range of interventions that aren't Top Charities and that in many cases are very early, with GiveWell support being critical to the NGO's survival. According to GiveWell's All Grants Fund page, "As of July 2022, we expect to direct about three-quarters of our grants to top charity programs and one-quarter to other programs, so there's a high likelihood that donations to the All Grants Fund will support a top charity grant." This suggests that in GiveWell's own calculus, innovation grants as a whole cannot be overwhelmingly better than Top Charities. Otherwise, Top Charities wouldn't account for the majority of the fund.
When thinking about counterfactual impact, the credit one gets for funding innovation should depend on the type of future donors the NGO ends up attracting. If these future donors would otherwise have given with low cost-effectiveness (or not at all), then you deserve much credit. But if they would have given to equally (or even more) cost-effective projects, then you deserve zero (or even negative) credit. So if GIF is funding NGOs that draw money from outside EA (whereas GiveWell isn't), it's plausible their innovations have more impact and are thus more "cost-effective". But we are talking about leverage now, so again, I don't think the methodologies are directly comparable.
Finally, I do think GIF should be more transparent about their impact calculations when making such a claim. It would very much benefit other donors and the broader ecosystem if they made public their 3x calculation (just share the spreadsheet, please!). Without such transparency, we should be skeptical and not take their claim too seriously. Extraordinary claims require extraordinary evidence.
Thanks for your response, Joel!
Stepping back, CEARCH's goal is to identify cause areas that have been missed by EA. But to be successful, you need to compare apples with apples. If you're benchmarking everything to GiveWell Top Charities, readers expect your methodology to be broadly consistent with GiveWell's and their conservative approach (and for other cause areas, consistent with best-practice EA approaches). The cause areas that stand out for CEARCH should do so because they are actually more cost-effective, not because you're using a more lax measuring method.
Coming back to the soda tax intervention, CEARCH's finding that it's 1000x GiveWell Top Charities raised a red flag for me, so it seemed that you must somehow be measuring things differently. LEEP seems comparable since they also work to pass laws that limit a bad thing (lead paint), but they're at most ~10x GiveWell Top Charities. So where's the additional 100x coming from? I was skeptical that soda taxes would have greater scale, tractability, or neglectedness, since LEEP already scores insanely high on each of these dimensions.
So I hope CEARCH can ensure cost-effectiveness comparability, and if you're picking up giant differences with existing EA interventions, you should be able to explain the main drivers of these differences (and it shouldn't be because you're using a different yardstick). Thanks!
Hi Joel, I skimmed your report really quickly (sorry) but suspect that you did not account for soda taxes eventually being passed anyway. So the modeled impact of any intervention shouldn't run to 2100 or beyond but only out a few years (I'd think <10 years), after which soda taxes would be passed without any active intervention. You are trying to measure the impact of a counterfactual donated dollar in the presence of all the forces already at play that are pushing for soda taxes (which is why some countries already have them). This makes for a more plausible model, and I believe it is how LEEP or OpenPhil model policy intervention cost-effectiveness (I could be wrong though).
New phrasing works well!
Got it. But I think the phrasing for the number of animals that die is confusing then. Since you say "100 other human [sic] would probably die with me in that minute," the reference is to how many animals would also die during that minute. I think what you want to say is, for every human death, how many animals would die, but that's not the current phrasing (and by that logic, the number of humans that would die per human death would be 1, not 100).
I'd suggest making everything consistent on a per-second basis, as smaller numbers are more relatable. So 1 other human would die with you that second, along with 10 cows, etc.
Thanks for writing this! The very last sentence seems off. Did you mean to say every second (instead of minute)? Also, the number of farm animals that die every second should be 1/60 (not 1/120) of that in the "minute" table above.
This last sentence was quite shocking for me to read. It's sad... but very powerful.
Minor suggestion: in your title and summary, please just write out "10 k" as 10,000. No need to abbreviate when people may be unsure that it's actually 10,000 (given that it's such a large difference).
I agree with Michael that concrete examples would be very helpful, even for researchers. A post should be informative and persuasive, and examples almost always help with that. In this case, examples can also make the underlying logic clear and show where the explanation might be confusing.
For example, let's think about investing in alternative protein companies as a way to tackle animal welfare. Assume that in a future state where lots more people eat real meat (bad world state), the returns for alt proteins are low but cost-effectiveness is high. This could be because alt proteins have faced lower rates of adoption (low returns) but it's now easier to persuade meat eaters to switch (search costs are low since more willing switchers can be efficiently targeted). The opposite situation is true too: in a good future state with few meat eaters, alt protein returns are high but cost-effectiveness is low. So this scenario should put us in your table's upper left quadrant (negative correlation between World State and Cost-Effectiveness + negative correlation between Return and Cost-Effectiveness).
This example illustrates how some of your quadrant descriptions may be confusing or even inappropriate:
"Underweight investment": I agree with this one, since to have a greater EV, you want investments with a positive correlation between returns and cost-effectiveness. This isn't true for alt proteins here, so you should avoid them.
"Divest from evil to do good": I don't think this makes sense, because alt proteins are not "evil" (though you should avoid them given the scenario).
"Mission leveraging": I was quite confused initially because I was assuming that the comparison is to no investment at all. If so, then investing in alt proteins can lead to an ambiguous impact on volatility (depending on the relative magnitude of return changes versus cost-effectiveness changes). It could in fact be mission hedging (with an improvement in the bad state) if the low returns end up producing more total good because of the state's high cost-effectiveness. However, I eventually realized that the comparison is to a fixed grant within the animal welfare space (although this was never made explicit in the post and may not be what most people would assume). If so, then this is indeed always mission leveraging, since a positive correlation between the world state and returns does ensure lower volatility.
So as you can see, an example makes clear where table descriptions may be inappropriate and where a clearer description can be helpful. It also makes more concrete what various correlation signs mean and how to think about them.
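To tie the alt-protein example to numbers, here's a toy two-state calculation. All probabilities, returns, and cost-effectiveness figures are invented for illustration only.

```python
# Toy two-state model of the alt-protein example above. All numbers
# are invented; "ce" is units of good done per dollar granted.
states = {
    "many meat-eaters (bad)": {"p": 0.5, "ret": 0.9, "ce": 2.0},  # low return, high CE
    "few meat-eaters (good)": {"p": 0.5, "ret": 1.5, "ce": 0.5},  # high return, low CE
}

# Good done per dollar if we invest first (dollar grows to `ret`), then grant.
ev_invest = sum(s["p"] * s["ret"] * s["ce"] for s in states.values())
# Good done per dollar if we make a fixed grant now, regardless of state.
ev_grant = sum(s["p"] * 1.0 * s["ce"] for s in states.values())

# Cross-state spread of good done (a crude volatility measure).
spread_invest = abs(0.9 * 2.0 - 1.5 * 0.5)  # invest: 1.8 vs 0.75 across states
spread_grant = abs(1.0 * 2.0 - 1.0 * 0.5)   # grant: 2.0 vs 0.5 across states

# With returns negatively correlated with cost-effectiveness, investing
# barely changes expected good (1.275 vs 1.25) but narrows the spread of
# good done across states, matching the lower-volatility point about
# comparing against a fixed grant.
```

This makes the correlation signs concrete: you can flip the numbers to any quadrant of the table and recompute the expected value and spread.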
This post (and the series it summarizes) draws on the scientific literature to assess different ways of considering and classifying animal sentience. It persuasively takes the conversation beyond an all-or-nothing view and is a significant advance for thinking about wild animal suffering as well as farm animal welfare beyond just cows, pigs, and chickens.
Thanks for the clarification, Owen! I had misunderstood "investment-like" as simply having return-compounding characteristics. To truly preserve optionality, though, these grants would need to remain flexible (can change cause areas if necessary, so grants to a specific cause area like AI safety wouldn't necessarily count) and liquid (can be immediately called upon, so Founders Pledge future pledges wouldn't necessarily count). So yes, your example of grants that result "in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values" certainly qualifies, but I suspect that's about it. Still, as long as such grants exist today, I now understand why you say that an optimal giving rate of (exactly) 0% is implausible.
Hi Owen, even if you're confident today about identifying investment-like giving opportunities with returns that beat financial markets, investing-to-give can still be desirable. That's because investing-to-give preserves optionality. Giving today locks in the expected impact of your grant, but waiting allows for funding of potentially higher-impact opportunities in the future.
The secretary problem comes to mind (not a perfect analogy, but I think the insight applies). The optimal solution is to reject the initial ~37% of all applicants and then accept the next applicant that's better than all the ones we've seen. Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. Otherwise, we should continue rejecting opportunities. This allows us to better understand the extent of impact that's actually possible, including opportunities like movement building and global priorities research. Future ones could be even better!
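For readers unfamiliar with the secretary problem, here's a quick simulation of the classic 1/e stopping rule; the candidate count, trial count, and seed are arbitrary choices for this sketch.

```python
import random

# Simulate the classic secretary problem: observe the first ~37% (1/e)
# of candidates without committing, then accept the first candidate
# who is better than everyone seen so far.

def picks_best(n, rng):
    quality = [rng.random() for _ in range(n)]
    cutoff = round(n / 2.71828)  # ~37% observation phase
    best_seen = max(quality[:cutoff])
    for q in quality[cutoff:]:
        if q > best_seen:
            return q == max(quality)  # committed: did we get the best?
    return quality[-1] == max(quality)  # forced to take the last candidate

rng = random.Random(42)
trials = 20_000
success_rate = sum(picks_best(100, rng) for _ in range(trials)) / trials
# The rule finds the single best candidate roughly 37% of the time,
# versus a 1% chance of picking the best of 100 at random.
```

The analogy is loose (giving opportunities aren't drawn i.i.d. and you can fund more than one), but the simulation shows why a long observation phase pays off when you can only commit once.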
I highly recommend the Founders Pledge report on Investing to Give. It models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.
Have you compared your analysis to this previous EA Forum post? Are there different takeaways? Have you done anything differently and if so, why?
Here's the math on moral/financial fungibility:
...
You're probably better off eating cow beef and donating the $6.03/kg to the Good Food Institute
Is refraining from killing really morally fungible with killing + offsetting? Would it be morally permissible for someone to commit murder if they agreed to offset that life by donating $5,000 to the Malaria Consortium? I don't mean to be offensive with this analogy, but if we are to take seriously the pain/suffering that factory farming inflicts on animals, we should regard it through a similar moral lens to inflicting pain/suffering on humans.
So, no, moral acts are not necessarily fungible. It is better not to eat meat in the first place than to eat meat and donate the savings to farm animal charities (even if you could save more animals that way). This is obvious from a rights-based moral framework, but even consequentialists would consider financial offsetting dangerous and unpalatable. The consequences of allowing people to engage in immoral acts + offsetting would be a treacherous and ultimately inferior world.
So your calculations are not the cost of eating meat but rather the cost of saving animals. You have not estimated the cost of chicken/cow suffering (which would require estimating utility functions and animal preferences), but rather the cost of alleviating suffering. Your low cost numbers don't imply that eating meat is inconsequential, but rather that it's very cost-effective to help chickens and cows. GiveWell's $5,000 per human life doesn't make human life cheap or murder trivial; it means we have an extraordinary opportunity to help others at a very low cost to ourselves.
Thanks so much for this very helpful post!
I'm a bit confused about your framing of the takeaway. You state that "reducing meat consumption is an unsolved problem" and that "we conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing meat and animal product consumption." However, the overall pooled effect across the 41 studies is statistically significant, with a p-value of <1%. Yes, the effect size is small (0.07 SMD), but shouldn't we conclude from the significance that these interventions do indeed work?
Having a small effect, or even a statistically insignificant one, isn't something EAs necessarily care about (e.g. most longtermist interventions don't have much of an evidence base). What matters is whether we can have an expected positive effect that's sufficiently cheap to achieve. In Ariel's comment thread, you point to a study that concludes its interventions are highly cost-effective at ~$14/ton of CO2eq averted. That's incredible given that many offsets cost ~$100/ton or more. So it doesn't matter if the effect is "small", only that it's cost-effective.
Can you help EA donors take the necessary next step? It won't be straightforward and will require additional cost and impact assumptions, but it'll be super useful if you can estimate the expected cost-effectiveness of different diet-change interventions (in terms of suffering alleviated).
Finally, in addition to separating out red meat interventions from all animal product interventions, I suspect it'll be just as useful to separate out vegetarian from vegan interventions. It should be much more difficult to achieve persistent effects when you're asking for a lot more sacrifice. Perhaps we can get additional insights by making this distinction?