I want to make the biggest positive difference in the world that I can. My mission is to cause more effective charities to exist by connecting talented individuals with high-impact intervention opportunities. To that end, I co-founded Charity Entrepreneurship, which pursues this goal through an extensive research process and incubation program.
Joey
Just wanted to chip in that I am quite positive about this choice and the direction that CEA could go in under Zach’s leadership. I have found Zach to be thoughtful about a range of EA topics and to hold to core principles in a way that is both rare and highly valuable for the leader of a meta organization.
Hey Vasco,
Love the post; I think it is super valuable to have these sorts of important conversations, directly thinking about cross-cause comparison. It’s worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects. Despite this, CE does not come to the same conclusion as this post; here are a couple of notes on why:
Strength of evidence discounting: CEAs are not all equal when they are based on very different strengths of evidence, and I think we weight this factor much more heavily. It’s quite common for the impact of any given intervention to regress fairly heavily as more research/work is put into it. We have found this in CE’s, GW’s, and other EAs’ research. This can be seen in even more depth in the GiveWell and EA Forum writings on deworming and on how to deal with speculative effects that possibly have very high upsides. For example, I would expect a five-hour CEA to be consistently off (almost always in a positive direction) compared to a 50-hour CEA. Calculations made at two different levels of rigor should not be directly compared. (This does not mean shorter-form CEAs are not worth doing, but I think we have to take their cons and likely regressions a lot more seriously than this post currently does.) This discounting should be applied even more heavily to flow-through effects, as the evidence for them is far lighter than for the direct effects. We tend to use something akin to the weighted quantitative modeling used here.
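To make the idea concrete, here is a toy sketch of evidence-strength discounting (not CE's actual model; the function name, discount factors, and all numbers are illustrative assumptions):

```python
# Toy sketch of discounting a cost-effectiveness estimate by the
# strength of evidence behind it. All names and numbers are
# illustrative assumptions, not CE's actual weights.

def discounted_cea(raw_estimate, evidence_discount, flow_through=0.0,
                   flow_through_discount=0.2):
    """Combine a discounted direct estimate with a more heavily
    discounted flow-through estimate (flow-through evidence is
    weaker, so it gets a harsher default discount)."""
    direct = raw_estimate * evidence_discount
    indirect = flow_through * flow_through_discount
    return direct + indirect

# A quick five-hour CEA might warrant a much heavier discount than a
# rigorous 50-hour one, so a larger raw number can end up lower.
quick = discounted_cea(100, evidence_discount=0.3, flow_through=50)
deep = discounted_cea(60, evidence_discount=0.8, flow_through=50)
```

Under these made-up numbers the quick CEA's larger raw estimate (100 vs. 60) ends up below the deeper CEA after discounting, which is the point of not comparing the two directly.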
Marginal funding and reliability of effects: Here’s a good example of how a CEA can regress really quickly. GiveWell typically does CEAs on marginal donations made, whereas many other CEAs—including the one you use from Saulius—do not consider marginal funding. I currently think that the marginal dollar to corporate campaigns is way less impactful compared to the average dollar of spending pre-2018. This can affect a CEA quite drastically. Another example is the funding of numerous animal interventions through corporate campaigns, which have become the “hit” of the animal movement. However, these campaigns are often deemed cost-effective without clear beforehand knowledge of what an additional dollar of funding would accomplish. It is a bit like measuring CE’s cost-effectiveness by looking at the top charity we incubated and assuming future charities will be equal to that. Variance is a real pain, and it’s not even clear whether other corporate campaigns will be as cost-effective as cage-free. On the other hand, top GW charities have this built in; they are not estimating the average EV of AMF’s top three historical campaigns, they are estimating the impact of marginal average future funding.
Variable animal effects dependent on intervention: You touch on this, but I think there is an important point missed. The effects on animals vary quite a lot depending on the intervention. Interventions that primarily affect mortality in Africa, for instance, end up looking the way you describe. But morbidity-focused interventions, mental health-focused interventions, and family planning interventions are all significantly less affected by this consideration. The same goes for any intervention that operates in contexts with lower meat consumption (such as India). I think if you remodeled this for an organization like Fortify Health (iron fortification in India), it would result in rather different outcomes.
If you combine these factors and look at a marginal dollar to FH vs a marginal dollar to THL (both of them with similarly rigorous CEAs and flow-through effects that are discounted based on certainty), I think the outcomes would be different enough to change your endline conclusion.
The non-epistemic difference I have is to do with ecosystem limitations, and is more specific to CE itself vs. general EA organizations. When we launch a charity, we need 1) founders, 2) ideas, and 3) funding. Each of these is fairly cause-area limited (and I think limiting factors are often more important than total scale). For example, if we aimed to found 10 animal charities a year (vs. 10 charities across all the cause areas we currently focus on), I do not think the weakest two would be anywhere near as impactful as the top two, and only a small minority of them would get long-term funding. In fact, with animal charities making up around a third of those we have launched, I think we already run close to those limitations. This means that even if we thought animal charities were more impactful than human ones on average, the difference would have to be pretty large for us to think that adding a 9th or 10th animal charity into the animal ecosystem would be more impactful than adding the first or second human-focused charity. I expect a version of this consideration can apply to other actors too. In general, I believe that given the current ecosystem, founding more than ~three to five charities per year within a given area would start to result in cannibalization between charities.
Thanks again for the consideration of this; I do think people should do a lot more cross-cause thinking, and I expect there are some really neglected areas that have significant intercausal impact.
So I want to be pretty careful about going into details, but I can mix some stories together to make a plausible-sounding story based on what I have heard. Please keep in mind this story is a fiction based on a composite of case studies I’ve witnessed, not a real example of any particular person.
Say Alice is an EA. She learns about it in her first year of college. She starts by attending an EA event or two and eventually ends up being a member of her university chapter and pretty heavily reading the EA Forum. She takes the GWWC pledge, and a year later she takes a summer internship at an EA organization. During this time she identifies strongly with the EA movement and considers it one of her top priorities. Sadly, while Alice is away at her internship her chapter suffers, and when she gets back she hits a particularly rough year of school; due to long-term concerns, she prioritizes school over setting the chapter back up, mainly thinking about her impact. The silver lining is that at the end of this rough year she starts a relationship. The person is smart and well suited, but does not share her charitable interests. Over time she stops reading the EA content she used to, and the chapter never gets started again. After her degree ends she takes a job in consulting that she says will give her career capital, but she has a sense her heart is not as into EA as it once was. She knows a big factor is that her boyfriend’s family would approve of a more normal job than a charity-focused one, plus she is confident she can donate and have some impact that way. She rationalizes her first few paychecks as needed to move out and get established, and the next few as building a safe six-month runway. The donations never happen. There’s always some reason or another to put it off, and EA seems so low on the priority list now, just a thing she did in college, like playing a sport. Alice ends up donating a fairly small amount to effective charities (a little over 1%). Her involvement was at its peak when she was in college, and she knows her college self would be disappointed. Each choice made sense at the time. Many of them even follow traditional EA advice, but the endline result is that Alice does not really feel she is an EA anymore. She has many other, stronger identities.
In this story, with different recommendations from the EA movement and different choices from Alice, she could have ended up earning to give and donating a large percentage long term, or working with an EA org long term; instead she “value drifted”.
Really interesting post, but I do want to flag a big concern I have with the comparative calculation. Broadly, estimated effects are almost always going to be way more positive than well-studied effects. For example, if you estimated GD’s impact using standard income-to-happiness adjustment measures (e.g. the value of doubling someone’s income on their happiness), you end up at a much higher number than the RCT results. I think this sort of thing happens pretty consistently and predictably. For example, it would be really easy to imagine StrongMinds treatments being different enough from the most studied ways of doing CBT for the treatment effects to persist only one year (which would reduce the cost-effectiveness to about equal), and it’s easy to imagine several such changes (almost all going in a more pessimistic direction).
On the flip side, there has been extensive research and evaluation, and a huge number of charities founded, in the global health space, leading to a comparatively small number of super strong charities, many of which are explicitly focused on cost-effectiveness/impact. This same work (as far as I know) has not been done in the mental health area. In many ways, you are comparing a very strong global poverty charity to a much more average mental health charity. Thus, personally, I would not necessarily need to see a current mental health charity beating GiveWell’s best to be convinced the area as a whole could be very effective (if some strongly researched, well-evaluated, impact-focused charities were founded or identified in the area). Given my current work with Charity Entrepreneurship, the main case I am considering is whether a new, well-researched, and impact-focused charity in mental health could be competitive with GiveWell top charities in effectiveness. I feel like the posts you have made over time have made this claim seem pretty plausible.
1) Where do you see untapped opportunities for nonprofit entrepreneurs in the space of mental health?
2) What role do you see entrepreneurs (vs. established organizations) playing in this field, including incubation programs like CharityEntrepreneurship.com, which has incubated mental health charities before?
3) How do you assess the potential of new mental health treatments for the Global South? Is this sufficiently prioritized, and do you see particular roadblocks to rapid adoption?
Hey Silas, really glad you wrote this up. I also recently donated bone marrow (after donating blood many times and being a bit torn on kidney donations). My experience was equally positive and probably even easier logistically (from London, UK).
Some hard-nosed calculations for those who might be interested (that I will write up in a full post one day): I lost about 1 full day of work and would expect the average person to lose between 1-3 days of work if they wanted to lose as few workdays as possible. My best estimate is this saved between 4-12 years of life for the person who received the donation. Overall, I think it fits quite well with my altruism sharpens altruism concept and is likely worth many EAs signing up for.
First a meta note less directly connected to the response:
Our funding circles fund a lot of different groups, and there is no joint pot, so it’s closer to a moderated discussion about a given cause area than CE/AIM making granting calls. We are not looking for people to donate to us or our charities, and as far as I understand, OpenPhil and AWF do not have a participatory way to get involved other than just donating to their joint pot directly. This request is more aimed at people who want to put in significant personal time to making decisions independent from existing funding actors.
More connected response:
Thanks for the thoughts, and the support you have given our past charities. I can give a few quick comments on this. Our research team might also respond a bit more deeply.
1) Research quality: I think, in general, our research is pretty unusual in that we are quite willing to publish research that has a fairly limited number of hours put into it. Partly, this is because our research is not aimed at external actors (e.g., convincing funders, the broader animal movement, other orgs) so much as at people already fairly convinced about founding a charity, and at the quite specific question of what would be the best org to found. We do take an approach that is more accepting of errors, particularly ones that do not affect endline decisions connected directly to founding a charity. E.g., for starting a charity on fish in a given country, we are not really concerned about the number of fish farmed unless that number is a significant determining factor for founding a charity in that space. We have gone back and forth on how much transparency to have in research and how much time to spend per report, and have not come to a fixed answer. We are more likely to get criticism/pushback with higher transparency + fewer hours per report, but typically think it will still lead to more promising charities in the end.
2) CE’s animal charity quality: I think both our ordering and our assessment of charity quality would be different from what is described here. I also think the animal welfare funds’ and Open Phil’s assessments (both of whom have funded the majority of these projects) would not match your description either. However, in some ways, these are small differences, as our general estimate is that 2 out of 5 charities in a given area are highly promising. It is quite a hits-based game, and that is about the number of charities we would expect (and would rank internally) as performing really well.
2.5) Feedback on animal charities: I did a quick review of the charities that got the most positive vs. negative feedback from the broader animal community at the time of idea recommendation, relative to your rank order and relative to our internal one, and did not find a correlation. Generally, I think the space is pretty uncertain, and thus the charities that got the most positive expectations were typically those that deviated the least from actions already taken in the space. I think that putting more time into the research reports (including getting more feedback) is one way to improve charity quality (at the cost of quantity), but I’m pretty skeptical it’s the best way. So far, the biggest predictive factor has not been idea strength but the founder team, so when thinking about where to spend marginal resources to improve charities, I would still lean that way (although it’s far from clear that this will always be the case).
3) I would be interested in doing a survey on this to get better data. I get the impression that we are seen as pretty disconnected from the animal space (and I think that is fairly true). I think we are far more involved in, e.g., the EA space, both when it comes to more formal research and when it comes to softer social engagement. I think our charities tend to go deeper into whatever area they are focusing on than our team does, and I am pretty comfortable with that. I would not be surprised if we are both invited to and attend fewer coordination events and meetings connected to the animal space; we like to stay focused quite directly on the problems we are working on.
Thanks again for writing this up. I put some probability on these issues being correct and important enough to prioritize, and it’s valuable to get pushback and flags even if we end up disagreeing about the actions to take.
We (Charity Entrepreneurship) have considered doing something like this. Would love to see the results and to know what locations you are considering. We are in west London.
So I think this conversation might be more productive if we clarified some terminology/dove into the specifics. There are a lot of different ways to set salaries in general.
Needs of the employee
Resources the organization has
Market rate including benefits (how desirable the job is—e.g. hedge funds pay loads but are stressful so need to pay more to make up for that)
Amount for the employee to be psychologically content
Amount that creates the best incentives for the organization/EA movement
Market rate replacement (if someone left, what you’d have to pay to get someone equally talented)
Pure market-rate earnings (what the highest-salary job would pay, not taking into account non-salary benefits—e.g. a hedge fund salary)
Value in impact to the organization
These varying ways cause a pretty dramatically wide spectrum of possible salaries. There is a case for using basically any of them. Ballpark numbers might range from 40k-400k depending on which system you use.
I think a lot of people are conflating things a bit; there seem to be two central questions: 1) which of the systems (or index of systems) is best to use, and 2) pragmatically, what do these systems look like when cashed out?
For example, Josh’s comment is getting at number 1; maybe we should be using “pure market rate earnings” or “value in impact to the organization” instead of “amount that creates best incentives”.
Ryan’s comment on the other hand is basically “the ideal incentives” might in fact correlate quite a lot to the resources the organization has.
I think splitting these out can make it easier to discuss each possibility.
The following is a rough breakdown of the percentage of people who were not asked to move on to the next round in the Charity Science hiring process. These numbers assume one counterfactual hour of preparation for each interview and no preparation time outside of the given time limit for test tasks.
~3* hours invested (50%) - Cover letter/resume
~5 hours invested (20%) - Interview 1
~10 hours invested (15%) - Test task 1
~12 hours invested (5%) - Interview 2
~17 hours invested (5%) - Test task 2
~337 hours invested (2.5%) - paid 2-month work trial
Hired (2.5%)
So, 95% of those not hired spend 17 hours or less, 85% spend 10 hours or less, and 70% spend 5 hours or less.
*Changed from 1 hour to 3 hours based on comments.
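For anyone who wants to check the attrition arithmetic, the funnel above is just a cumulative sum over the per-stage rejection percentages (a quick sketch using the numbers listed):

```python
# Sketch of the funnel arithmetic: each tuple is (stage, cumulative
# hours invested, percentage of all applicants cut at that stage),
# taken from the breakdown above.
stages = [
    ("cover letter/resume", 3, 50.0),
    ("interview 1", 5, 20.0),
    ("test task 1", 10, 15.0),
    ("interview 2", 12, 5.0),
    ("test task 2", 17, 5.0),
]

cumulative = []
total = 0.0
for name, hours, rejected_pct in stages:
    total += rejected_pct
    cumulative.append((hours, total))
    print(f"{total:.0f}% of applicants are out after <= {hours} hours invested")
```

Running this reproduces the summary figures: 70% out after 5 hours or less, 85% after 10 hours or less, and 95% after 17 hours or less.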
Hey Larks, thanks for the great comment. I think it gets at some key assumptions one has to consider when evaluating this as an intervention. We didn’t end up going into that in this post, but happy to cover it below.
I both see the scenario in which the benefits outweigh the costs (the one in which we would be happy to incubate this charity), and I also see scenarios where the costs are higher than the benefits (in which case we wouldn’t recommend it). Specifically:
Existing people get the benefit of building relationships with these new people.
When you consider the context of the families that an intervention such as this would be impacting, I think the benefits you laid out are a lot smaller (to the point that they do not largely change the calculation). These are typically large families (my expectation is that the 4th child or grandchild does not carry the same weight as the first, particularly when it comes to long-term support of the family).
Division of Labour—whereby people specialise in one specific area they become more efficient at it. The larger the population, the more specialisation it can support.
They are also typically in low-income jobs with limited specialization (often family planning is most needed in families earning income from primary agriculture). I expect that averting unwanted pregnancy frees up household income to spend on the current family, e.g. on more education opportunities or a more nutritious diet that has further positive flow-through effects on the family. I think this same education confounder also cross-applies to creating more artists and scientists. It’s not at all clear to me that a net higher population would achieve this better than a higher average education level with smaller families.
Many things have increasing returns to scale, and so are more efficient with larger populations
Although I have some sympathy for the economies-of-scale arguments, I think, depending on the country, the efficiency effects of having a very young or rapidly growing population trade off against this in quite an unfavourable way. I also think there are fewer economies of scale in less connected, more rural settings (e.g. things like electricity or water have limited scale in these locations). I also expect these benefits to be quite small relative to the factors we currently consider.
It is of course possible that these benefits might be outweighed by the costs outlined in the report. But we cannot simply assume that this is the case.
When we are modelling cost-effectiveness on that sheet, we are not aiming to take into account all of the externalities, but rather to compare between interventions within family planning, so you probably won’t find them there. We would use a different methodology to take them into account. But I take your point about the broader cost-benefit considerations.
As life is good for most people, this is a major advantage. They get to experience the joys of playing and growing and love and all the other good things in the world.
I do think you have hit on the really key assumption that can change one’s model of family planning, though: “Life is good for most people”. We spend a considerable amount of time thinking about this, and I agree that there is a lot of moral and epistemic uncertainty around the issue. It is probably the hardest thing to take into account when assessing the moral weights of various outcomes. Depending on how one takes it, it can result in the equivalent of 60 years of either utility or disutility. However, I think again we have to look at the population very closely. Populations that do not have access to family planning information or counselling are more likely to have lower happiness levels. The country our last family planning charity chose to work in is Nigeria, where average happiness fluctuates between 5 and 6 out of 10. Another country we recommend is Senegal, where the numbers are even lower. But I would say even this data is not precise enough, as even within countries, populations without access to family planning are typically far lower income than average. Also, the child whose existence would be prevented would be a child the family would prefer not to have, and this seems likely to affect the average happiness of both the child and the family. We know the SD of happiness in Nigeria is pretty large, ~2.5 (this variation is also typical across other locations). It’s hard to know exactly what happiness that person would have over their life; it could easily be in the 3-4/10 range. If you think a year lived at 3-4 is net positive and something you would want to create more of, then indeed this is a huge factor against family planning. If you think it’s net negative, then it’s a huge factor in favour. I think this is one of the key ethical questions. It has a lot more to do with positive- vs. negative-leaning utilitarianism and how you weight various levels of subjective well-being.
This is a factor we considered a lot when thinking about it and although I think there are defendable different perspectives our team generally came down on the side of this effect being a net positive for family planning (some more info here).
I do think we could have made improvements to the report to make some of these judgement calls more clear and bring people’s attention to the factors that significantly affect the analysis. We do tend to discuss these considerations, and outline when the general judgement about family planning may differ according to some ethical or empirical differences, in much greater depth with incubatees who are considering working in these areas. It is indeed a complex issue; because of this, we have typically found it easier to discuss in conversation rather than in writing. I agree that the report could have been better written to take that into account.
This in many ways is the default path for how many NGOs grow. I think there are quite a few reasons why CE overperforms relative to this. Decentralization broadens the risk profile that each charity is able to take, and smaller organizations move far, far quicker. I suspect the biggest factor, though, is not structural but social. The founders we get applying are really strong relative to the people an organization like CE could hire as program directors. Due to the psychology of ownership, they work far more effectively on their project than they would as employees of a larger organization.
As someone who has been concerned about insects as an area for years, I think the aspect that stops animal-focused people I speak to from engaging with insects as a cause area is not really to do with scale or neglectedness. Many vegans do not eat honey, suggesting a concern for the bees creating it, and SWP (https://www.shrimpwelfareproject.org/) has gotten quite a lot of support from the animal movement. The issue is pretty directly tied to tractability and concrete actions that can be taken. If the current interventions focused on insects are research-oriented with unclear pathways for how insects in fact get helped, that will be a blocking factor for many EA animal advocates. I think in many cases right now, people see insect welfare much like wild animal suffering: an interesting, high-scale area with no clear, significant actions that can be taken.
Hey Stefan,
Thanks for the comment, I think this describes a pretty common view in EA that I want to push back against.
Let’s start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations’ actions and projects seem insanely high value relative to others, for example, a chapter leader who basically follows the same script (a pattern I definitely could have fallen into personally). I think something that is often forgotten is the extremely high upside value of doing something outside the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this; e.g. if EA had deferred only to GiveWell or only to more traditional philanthropic actors, how impactful would this have been?
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of other epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty positive when deferring based on expertise (e.g. “this doctor knows what a CAT scan looks like better than me”, or “GiveWell has considered the impact effects of malaria much more than me”). I think these sorts of situations lend themselves to higher deference. Questions like “how much ethical value do I ascribe to animals” or “what is my tradeoff of income to health” are 1) way less considered, and 2) much harder to gain clarity on through deeper research. I see a lot of deferral based on this sort of thing, e.g. assumptions that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people with a set of specific viewpoints coming in) or communication effects (e.g. I engaged considerably less in EA when I thought direct work was the most impactful thing than when I thought meta was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to its hours (e.g. many people assume more careful, broad-based cause consideration has been done than really has been; when you have a more detailed view of what different EA organizations are working on, you see a different picture).
It might be helpful to add some useful reference classes here as I think it’s often forgotten how unusual EA salaries are relative to other fields.
Average world GDP per capita: £11,000
London living wage: £21,800
Median full-time UK salary: £26,800
Average UK nonprofit salary: £31,700
Average London salary: £39,000
Average London nonprofit salary: £39,600
Average CE employee salary: £39,300
Entry-level EA job: £48,000
Average EA job: £80,000
Our policy regarding salaries has not changed as much as other meta charities; leanness tends to attract a different sort of applicant. We have a range ($40-$60k) but would consider applications from candidates who need higher than that range. In practice, we have often found the most talented candidates are less concerned with salary and more concerned about other factors (impact of the role, culture, flexibility, etc.). We are a bit skeptical about the perception that talent increases from offering higher salaries (instead of attracting new talent, we typically see the same EA people getting job roles but just for a higher cost).
I do tend to think that most people’s limiting factor is energy instead of time. E.g. it is rare to see someone work till they literally run out of hours on a project vs needing a break due to feeling tired. Even people working 12 hours a day, I still expect they run out of energy before time, at least long term. I would not typically see emotional energy as my limiting factor, but I do think it’s basically always energy (a variable typically positively affected by altruism in other areas) vs. time or money (typically negatively affected).
I think there are a few things that fit into this category; how much deference there is in the EA space would be one. Another would be the relative importance of high-absorbency career paths. Some things we have not written about but that also fit would be how EA deals with low-evidence-base/low-feedback-loop spaces, or how little skepticism is applied to EA meta charities.
Just wanted to chip in on this. Although I do not think this addresses all the concerns I have with representativeness, I do think CEA has been making a more concerted and genuine effort at considering how to deal with these issues (not just this blog post, but also in some of the more recent conversations they have been having with a wider range of people in the EA movement). I think it’s a tricky issue to get right (how to build a cause neutral EA movement when you think some causes are higher impact than others) and there is still a lot of thought to be done on the issue, but I am glad steps are happening in the right direction.
“Please don’t criticize central figures in EA because it may lead to an inability to secure EA funding?” I have heard this multiple times from different sources in EA.