I want to make the biggest positive difference in the world that I can. My mission is to cause more effective charities to exist by connecting talented individuals with high-impact intervention opportunities. To this end, I co-founded the organisation Charity Entrepreneurship, which pursues this goal through an extensive research process and incubation program.
Joey 🔸
This take was aimed more at hiring/staffing than at direct outreach/EA chapters.
Cause Plurality vs Cause Prioritization
“I don’t really think non-OpenPhil EA donors should give to farmed animal welfare, for example.” Wow, this is interesting! I would love to hear more about what you mean by this.
So, I have some mixed views about this post. Let’s start with the positive.
In terms of agreement: I do think organizational critiques are valuable, and past critiques of ACE specifically have helped improve its direction and impact. I also love the idea of having more charity evaluators (even in the same cause area) with slightly different methods or approaches to determining how to do good, so I’m excited to see this initiative. I also have quite a bit of sympathy for giving higher weight to explicit cost-effectiveness models when it comes to animal welfare evaluations.
I can personally relate to the feeling of being disappointed after digging deeper into the numbers of well-respected EA meta organizations, so I understand the tone and frustration. However, I suspect your arguments may get a lot of pushback on tone alone, which could distract from the post’s more important substance (I’ll leave that for others to address, as it feels less important to me).
In terms of disagreement: I will focus on what I think is the crux of the issue, which I would summarize as: (a) ACE uses a methodology that yields quite different results than a raw cost-effectiveness analysis; (b) this methodology seems to have major flaws, as it can easily lead to clearly incoherent conclusions and recommendations; and (c) thus, it is better to use a more straightforward, direct CEA.
I agree with points A and B, but I am much less convinced about point C. To me, this feels a bit like an isolated demand for methodological rigor. Every methodology has flaws, and it’s easy to find situations that lead to clearly incoherent conclusions. Expected value theory itself, applied in pure EV terms, has well-known issues like the St. Petersburg paradox, the optimizer’s curse, and general model error. CEAs share these issues and have additional flaws of their own (see more on this here). I think CEAs are a super useful tool, but they are ultimately a model of reality, not reality itself, and I think EA can sometimes get too caught up in them (whereas the rest of the world probably doesn’t use them nearly enough). GiveWell, which has ~20x the budget of ACE, still finds model errors and openly discusses how softer judgments on ethics and discount factors influence outcomes (and it considers more than just a pure CEA calculation when recommending a charity).
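To make the optimizer’s curse concrete, here is a minimal simulation sketch (all numbers are made-up assumptions, not real charity figures): even when every option is identical, funding the option with the highest noisy CEA estimate systematically overstates its true value.

```python
import random

# Illustrative sketch of the optimizer's curse (all numbers are assumptions).
# 20 charities share the same true cost-effectiveness, but each CEA estimate
# carries independent noise. We repeatedly "fund" the apparent winner and
# measure how much its estimate overstates the truth.
random.seed(0)

NUM_CHARITIES = 20
TRUE_VALUE = 10.0   # true units of good per dollar (identical for all)
NOISE_SD = 3.0      # standard deviation of CEA estimation error
TRIALS = 10_000

total_overstatement = 0.0
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(NUM_CHARITIES)]
    total_overstatement += max(estimates) - TRUE_VALUE  # winner's estimate vs. truth

print(f"Average overstatement of the 'winning' CEA: {total_overstatement / TRIALS:.2f}")
# Under these assumptions this prints roughly 5.6, i.e. the selected estimate
# overstates the truth by nearly two noise standard deviations, even though
# every charity is actually identical.
```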
Overall, being pretty familiar with ACE’s methodology and with CEAs, I would expect, for example, that a 10-hour CEA of the same organizations would land quite a bit further from an organization’s actual impact or effectiveness. It’s not clear to me that spending equal time on pure CEAs versus a mix of evaluative techniques (as ACE currently does) would lead to more accurate results (I would probably weakly bet against it). I think this post overstates the case for discarding a model because it has an exploitable flaw.
A softer argument, such as “ACE should spend double the percentage of time it currently spends on CEAs relative to other methods” or “ACE should ensure that intervention weightings do not overshadow program-level execution data,” is something I have a lot of sympathy for.
I do not think there is much reward in the charity sector for identifying undervalued organizations, particularly by criteria that differ from what the market as a whole aims for, which sadly is not cost-effectiveness. I think that’s part of why it’s a lot easier to find promising missed opportunities here compared to the for-profit sector.
I do think it’s much harder (assuming ~equal time) for someone to spend $100 million cost-effectively compared to $100, due to systemic differences. However, I would predict there are many people who could spend $100,000 and get a higher ROI than many people spending $10 million, due to a lack of efficient delegation/regranting/communication between the two.
A thing that seems valuable but is not talked about much: whether an organization brings new talent into the EA/impact-focused charity world, re-uses people already in the movement, or turns people off the movement. The difference between these effects seems both significant and pretty consistent within an organization. I think Founders Pledge is a good example of an organization that, on net, brings talent into the effective charities world. I often see their hires go on, after leaving FP, to pretty impactful roles that it’s not clear they would have taken absent their experience working for FP. I wish more organizations brought people in like this, versus re-using or turning people off.
Pretty interesting consideration; it is one I have not thought about/modelled that much. I wonder if someone could do a simple version of this by considering willingness to pay, e.g., how much a different charity would pay for a given piece of IP. My guess, though, is that many things would be relatively low value compared to yearly org costs (e.g., I’m not sure someone would pay more than one year of AIM’s budget for all our IP).
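A minimal sketch of that willingness-to-pay heuristic, with entirely hypothetical asset names and dollar figures (none of these are real AIM numbers):

```python
# Hypothetical willingness-to-pay valuation of an org's IP.
# Every asset name and dollar figure below is a made-up placeholder.
ip_assets = {
    "research reports":    40_000,  # assumed max another charity would pay ($)
    "training curriculum": 60_000,
    "internal tooling":    15_000,
}

yearly_org_cost = 1_500_000  # hypothetical annual budget ($)

total_ip_value = sum(ip_assets.values())
print(f"Estimated IP value: ${total_ip_value:,}")
print(f"Share of one year's budget: {total_ip_value / yearly_org_cost:.0%}")
# Under these assumptions the IP totals well under a year's budget,
# matching the intuition in the comment above.
```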
The biggest intangible asset that comes to mind, which I have not seen modelled much, is the implicit staff training that happens in certain jobs. E.g., if the average staff member goes on to roles roughly five times more impactful than the jobs they were being offered before, something quietly valuable has probably happened over their tenure. Shoutout to Founders Pledge for this, as I feel like I see a ton of really valuable people entering the EA job world via Founders Pledge. I think the counterfactual is often that they would get far less impactful jobs than they do after working for FP.
Mostly in EA meta...
My sense is that AMF has gotten a little less cost-effective over time due to working in slightly less ideal countries. GiveDirectly might be pretty close to flat, as I am less sure how low-hanging fruit affects them; from a quick Google search, the percentage of their funding that goes to beneficiaries has been pretty similar over time.
Good thought. I like putting things at close-to-real-life scale, so I made the change to $1.5m and $150k to be in line with the ~average CE/AIM charity.
“I can’t think of many nonprofit organizations that I’m convinced become more cost-effective as they grow in their core job.”
I can’t say I have many great examples of this either, at least past the first ~3-5 years or ~$1-3m budget. With AIM/CE charities, I think they tend to become more cost-effective in years 3-5 than they are in years 1-2, so there are some gains from very early-stage growth.
Although, I guess one mitigating factor here is that early-stage organizations are sometimes effectively cost-offset by a dedicated, high-talent founder. So maybe early ‘on-paper’ numbers don’t fully reflect the counterfactual costs, and that hidden subsidy would shrink with growth.
Expectations Scale with Scale – We Should Be More Scope-Sensitive in Our Funding
Mostly basing this on the macro data I have seen that seems to suggest giving as a % of GDP has stayed pretty flat year to year (~2%).
I think a semi-decent amount of broadly targeted, adult-oriented outreach would have resulted in me finding out about EA (e.g., I watched a lot of TED Talks and likely would have found out about EA if it had had TED Talks at that point). I also think media that are not aimed at a given age, but also do not penalize someone for theirs, would have been effective. For example, when I was young, I took part in a lot of forums, partly because they didn’t care about or know my age.
I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe: the targeting of really young people (for this quicktake, anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we did not have to, given the quality of our ideas, and 2) these folks are typically far from being able to make a real impact, and there is lots of time for them to lose interest or get lost along the way.
Interestingly, this stands in contrast to my personal experience—I found EA when I was in my early 20s and would have benefited significantly from hearing about it in my teenage years.
Joey’s Quick takes
Yes, reasonable points. I do think there is a mediating size variable here. E.g., for organizations with sub-$1m budgets, knowing that others would/could donate if needed might beat actually diversifying, given the time costs of doing so. I do think the 2x time is a lower bound, as some funders might take ~10x more time, and you are more likely to put up with them if you are seeking diversification.
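As a purely illustrative back-of-envelope sketch of that trade-off (every number below is an assumption I am adding, not a figure from the thread), the 2x-vs-10x time cost can flip whether diversification is worth it:

```python
# Back-of-envelope sketch: when is adding a second funder worth the time?
# Every number below is an illustrative assumption, not a figure from the thread.

def diversification_verdict(time_multiplier: int) -> str:
    base_fundraise_hours = 100   # assumed hours to raise the budget from one funder
    staff_hour_value = 150       # assumed $ of impact per staff hour spent elsewhere
    budget = 1_000_000           # sub-$1m org, per the comment's size threshold

    extra_hours = base_fundraise_hours * (time_multiplier - 1)
    cost = extra_hours * staff_hour_value
    # Assumed benefit: diversification cuts the chance of losing the whole
    # budget (e.g., a sole funder exits) from 10% to 2%.
    benefit = (0.10 - 0.02) * budget
    verdict = "worth it" if benefit > cost else "not worth it"
    return f"{time_multiplier}x time: cost ${cost:,.0f} vs. benefit ${benefit:,.0f} -> {verdict}"

for multiplier in (2, 10):  # the comment's lower bound and its ~10x slow-funder case
    print(diversification_verdict(multiplier))
```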
I think people model this in theory more often than it happens in practice. Three reasons why:
1. The total funding pie is pretty fixed; I expect genuinely growing it to be quite rare.
2. It could be that the next org on a funder’s list is much worse, but normally, if they are funding something effective, there is a non-random reason that org came to the top of the list. I think it’s more common that a funder has different values, e.g., they support both global health and mental health, and you have a strong view that the one you are working on is significantly better (of course, someone working on the other would say the opposite). But it seems a bit immodest/morally presumptuous to treat this counterfactual as massively different based on a pretty debatable value call.
3. You can, in fact, recommend that your funding go somewhere else with high EV (thus raising the counterfactual). E.g., if someone says, “I like AIM, I want to give it $1m,” I can say, “Sorry, we have no room for funding, but have you considered X charity? It is also really good and covers similar ground.” This is not possible in every fundraising situation, but it is doable, and when I know an opportunity that I think is $-for-$ better than AIM, I have been pretty successful at redirecting funding that would have gone to AIM.
I think the suggestion here makes sense, although I likely have a more pessimistic model of funder (and charity, for that matter) rationality. E.g., I expect a charismatic charity founder to have ~2x the fundraising ability of an equally talented but less charismatic one, even in EA. This creates a bit more noise in the system and makes me inclined to set higher bars to compensate for it.
An idea that has always motivated me is the veil of ignorance: the basic concept being, how would you want society to be structured if you did not know who you would be in it? A world where people in the top 25% donate significantly to those less well off has always appealed to me and felt right. The GWWC pledge was one of the first long-term charitable actions I took in this direction. I remember signing my paper copy of the pledge with four other friends, two of us taking the Further Pledge and the others taking the 10% pledge. It felt both important and significant to our group and carried real weight.