I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here’s a really interesting debate on whether biodiversity loss should be an EA cause area, for example.
A lot of forms of global utilitarianism seem to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about the moral value of non-humans, or tractability—rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don’t fit into this value framework.
But this highlights where I think part of the problem lies: value systems outside of this framework can also be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’—which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we’ve probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers, and not spending too much time overthinking your value system (I feel the same!).
Similarly, I wonder if one of the major activities this group could do together is joint funding, possibly by forming a funding circle. When I was EtG, I just donated broadly to GiveWell Top Charities because I found cause selection overwhelming, but a community of similar funders with some hobbyist-type research into causes and charities might’ve engaged me more.
Thank you so much! That’s a great clarification ❤️
I hate to keep riding this hobby horse, but I wish that questions about mental health as an EA cause area would distinguish between mental health as a global health problem and mental health for EAs or other capacity-building purposes (or, if not, just leave them out). Conflating them in the same question without clear disambiguation, especially around prioritisation, makes this data nearly useless, because I don’t know which interpretation the answerer had in mind. (I hope it’s not too late to add a clarification now?)
I think most of the article is pretty stock-standard, but I did want to offer a novel angle for replying to these kinds of critiques if you see them around:
> When Notre Dame caught on fire in 2019, affluent people in France rushed to donate to repair the cathedral, a beloved national landmark. Mr. Singer wrote an essay questioning the donations, asking: How many lives could have been saved with the charitable funds devoted to repairing this landmark? This was when a critique of effective altruism crystallized for Ms. Schiller. “He’s asking the wrong question,” she recalled thinking at the time. She wanted to know: How could anyone put a numerical value on a holy space?
>
> Ms. Schiller had first become uncomfortable with effective altruism while working as a fund-raising consultant. She encountered donors who told her, effectively, “I’m looking for the best bang for my buck.” They just wanted to know their money was well spent. That made sense, though Ms. Schiller couldn’t help but feel there was something missing in this approach. It turned the search for a charitable cause into an exercise of bargain hunting.
>
> The school of philanthropy that Ms. Schiller now proposes focuses on “magnificence.” In studying the literal meaning of philanthropy — “love of humanity” in Greek — she decided we need charitable causes that make people’s lives feel meaningful, radiant, sacred. Think nature conservancies, cultural centers and places of worship. These are institutions that lend life its texture and color, and not just bare bones existence.
I’d humbly propose that, without good guardrails, this kind of thinking has a good shot at turning racist/Anglocentric. It’s notable, of course, that the article mentioned Notre Dame, and not the ongoing destruction of religious history in Gaza or Syria or Afghanistan or Sudan or Ukraine (for example). If critics of EA don’t examine their own biases about what constitutes ‘magnificence’, they risk contributing to worldviews that they probably abhor. Moreover, in many of these cases, these kinds of fundraisers contribute to projects that should be—and usually otherwise would be—funded by government.
If you value civic life and culture, but only contribute to your local, Western civic life and culture, then you are a schmuck and have been taken advantage of by politicians who want to cut taxes for the wealthy. Please, at least direct your giving outward.
Thank you—this is a great clarification! Appreciate your work!
I was curious about the thing with Vida Plena too (since I work for Kaya Guides, and we’re generally friendly). I cooked up a quick data explorer for the three mental health charities:
It’s an interesting pattern. It does appear that Kaya Guides and Vida Plena got a roughly equal number of partial votes, suggesting that people favourable to those areas ranked them equally highly. But among completers, Vida Plena got a lot of lower votes—even more so than ACTRA! Given that ACTRA is very new, I think they make a good control group, which to me implies something like ‘completers think Vida Plena is likely to be less favourable than the average unknown mental health charity’.
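For the curious, here’s roughly the shape of that analysis. This is a minimal sketch, not the actual explorer: the CSV filename and column names are hypothetical stand-ins for whatever the Donation Election ballot export actually looks like.

```python
import pandas as pd

# Hypothetical ballot export: one row per (voter, charity) ranking.
# The filename and columns (voter_id, charity, rank, completed) are
# assumptions, not the real Donation Election schema.
votes = pd.read_csv("ballots.csv")

charities = ["Kaya Guides", "Vida Plena", "ACTRA"]
subset = votes[votes["charity"].isin(charities)]

# Compare rank distributions for completers vs. partial voters.
for completed, group in subset.groupby("completed"):
    label = "completers" if completed else "partial voters"
    print(f"--- {label} ---")
    print(group.groupby("charity")["rank"].describe()[["count", "mean", "50%"]])
```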
Your hypothesis makes sense to me; many in the EA community don’t know the specifics of Vida Plena’s program or its potential for high cost-effectiveness, probably due to previous concerns around HLI’s evaluations. I personally think this is unfounded, and clearly many partial voters agree, as Vida Plena ranked quite highly even if you assume that some number of these partial voters sorted by a GHD focus and only voted for GHD charities (higher than us!).
I am not very familiar with the terminology, but from context clues such as:
> I assumed cows’ conditions are as bad as those of broilers in a reformed scenario
I gathered that ‘conventional scenario’ refers to conditions à la most factory farming, and ‘reformed scenario’ to more humane conditions, including free range. But there’s a good chance I just misinterpreted this?
Regardless, whatever you think the reformed scenario is, it sure seems like it would be advantageous to switch your chicken consumption to it!
I went through your estimates, and I actually came away persuaded that buying broilers from a reformed scenario gets you both a reduction in pain and a more climate-positive outcome. I feel like the title and tone of the post set up an artificial tension between caring about animals and caring about the climate (under the constraint that you still want to eat meat)! Am I misinterpreting you?
I suppose to tack onto Elliot’s answer, I’m curious about what you see the differences in reasoning to be. If it is merely that GCR giving opportunities are more hits-based / high variance, I could see, for example, a small label being applied on the GWWC website next to higher-risk opportunities with a link to something like the explanations you’ve written above (and the evaluation reports).
That kind of labelling feels like only a quantitative difference from the current binary evaluations (as in, currently GWWC signals inclusion/exclusion, but could extend that to signal for strength of evaluation or risk of opportunity).
OpenAI have their first military partner in Anduril. Make no mistake—although these are defensive applications today, this is a clear softening, as their previous ToS banned all military applications. Ominous.
Markdown footnotes work, they’re just different to the first-class ones: if I remember right, it’s the extended-Markdown syntax of `[^1]` in the text with a matching `[^1]: footnote text` line at the bottom. This post was fully Markdown (and I couldn’t have done it without it!)
Nice! Is there an associated Markdown syntax I can use that will trigger the same functionality?
Mmm, so maybe the crux is at (3) or (4)? I think that GWWC may be assuming too much about how viewers are interpreting the messaging and presentation around the evaluations. I think there is probably a way to signal the differences in evaluation strength while still maintaining the BYO worldview approach?
I’ve been going through the evaluation reports and it seems like GWWC might not be as confident in Longview’s Emerging Challenges Fund or the EA Long-Term Future Fund as they are in their choices for GHD and Animal Welfare. The reports for these funds often include some uncertainties, like:
> There were several limitations to the evaluation, including various conflicts of interest, limited access to relevant information (in part because the LTFF was not set up to be evaluated in this way), our lack of direct expertise and only limited access to expert external input, and the general difficulty of evaluating funding opportunities in the global catastrophic risk cause area.
>
> We don’t know of any clearly better alternative donation option in reducing GCRs
On the other hand, the Founders Pledge GHD fund wasn’t fully recommended due to more specific methodological issues:
> Our belief that FP GHDF evaluations do not robustly demonstrate that the opportunities they fund exceed their stated bar of 10x GiveDirectly in expectation, due to the presence of errors and insufficiently justified subjective inputs we found in some of their BOTECs that could cause the estimated cost-effectiveness to fall below this threshold.
Until I read various posts around the forum and personally looked into what LTFF in particular was funding, I was under the impression—partly from GWWC’s messaging—that the LTFF was at least comparable to a GiveWell or even an ACE. This is partly because GWWC usually recommend their GCR funds at the same time as these other funds.
It might be on me for having the wrong assumptions, so I wrote out my chain of thinking, and I’m keen to find out where we disagree:
1. There might be different evaluation standards between cause areas
2. These varying standards could mean a higher risk of wastage or even fraud in more risky funding opportunities; see this analysis
3. GWWC’s messaging and presentation might not fully convey these differences (the paragraph for these two funds briefly mentions lower tractability and an increased need for specialised grantmaking)
4. It might be challenging for potential donors to grasp these differences without clear communication
(I originally posted this to the 2024 recommendations but thought it might be more constructive / less likely to cause any issues over in this thread)
Goes without saying, but thank you for continuing to promote & facilitate experiments in democratic giving!
That makes so much more sense, thank you!
I was interested in a different presentation of the income estimate data, so I plotted it against the approximate actual income brackets:
I would love to see a deeper version of this graph that includes more of the raw data & has better error bars. I feel like this presentation is much more insightful!
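For anyone who wants to reproduce or extend it, here’s a minimal sketch of the kind of plot I mean. The bracket labels and numbers below are made-up placeholders, not the survey’s actual data:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder summary data: income brackets, mean estimated income, and
# a rough standard error. None of these are the survey's real numbers.
df = pd.DataFrame({
    "bracket": ["<$25k", "$25–50k", "$50–100k", "$100–200k", ">$200k"],
    "estimated_income": [20_000, 38_000, 72_000, 140_000, 260_000],
    "estimate_se": [3_000, 4_000, 6_000, 12_000, 30_000],
})

fig, ax = plt.subplots()
ax.errorbar(df["bracket"], df["estimated_income"],
            yerr=df["estimate_se"], fmt="o", capsize=4)
ax.set_xlabel("Approximate actual income bracket")
ax.set_ylabel("Estimated income")
ax.set_title("Estimated vs. actual income (placeholder data)")
plt.tight_layout()
plt.show()
```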
I think I’m struggling to follow your point. No EA knows anyone with malaria, and indeed the whole ethos of EA is to try to prevent personal connections from getting in the way of doing a lot of good. I couldn’t understand from what you wrote why “diseases we’ve learned about abstractly” would be more likely to be focused on than smoking (which presumably we’ve learned about more practically, via harm-reduction education).
I think it might be possible to make an interesting point about how, because smoking is seen as lower-class in most EA contexts, it’s treated with more disdain than malaria, which has no class connotation. Is this what you meant?