I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.
Joel Tan🔸
Hi Jamie,
For (1), I agree with 80k’s approach in theory—it’s just that cost-effectiveness is likely heavily driven by the cause-level impact adjustment, so you’ll want to model that in a lot of detail.
For (2), I think just declaring up front what you think is the most impactful cause(s) and what you’re focusing on is pretty valuable? And I suppose when people do apply/email, it’s worth making that sort of caveat as well. For our own GHD grantmaking, we do try to declare on our front page that our current focus is NCD policy and also if someone approaches us raising the issue of grants, we make clear what our current grant cycle is focused on.
Hope my two cents is somewhat useful!
I think you’re right in pointing out the limitations of the toy model, and I strongly agree that the trade-off is not as stark as it seems—it’s more realistic to model it as a delay from applying to EA jobs before settling for a non-EA job (and that this won’t be anything like a year).
However, I do worry that the focus on direct work means people generally neglect donations as a path to impact, and so the practical impact of deciding to go for an EA career is that people decide not to give. An unpleasant surprise I got from talking to HIP and others in the space is that the majority of EAs probably don’t actually give. Maybe it’s the EA boomer in me speaking, but it’s a fairly different culture compared to 10+ years ago, when being EA meant you bought into the drowning child arguments and gave 10% or more to whatever cause you thought most important.
I apologize if we’re talking at cross purposes, but the original idea I was trying to get across is that when valuing additional talent from community building, there is the opportunity cost of a non-EA career where you just give. So basically you’re comparing (a) the value of money from that earning to give vs (b) the value of the same individual trying for various EA jobs.
The complication is that (i) the uncertainty of the individual really following through on the intention to earn to give (or going into an impactful career) applies to both branches; however, (ii) the uncertainty of success only applies to (b). If they really try to earn to give, they can trivially succeed (e.g. give 10% of the average American salary—so maybe $5k, ignoring adjustments for lower salaries for younger individuals and higher salaries for typically elite-educated EAs). However, if they apply to a bunch of EA jobs, they aren’t necessarily going to succeed (i.e. they aren’t necessarily going to be better than the counterfactual hire). So ultimately we’re comparing the value of an additional $5k annual donation vs an additional ~10 applications of average quality to various organizations (depending on how many organizations an applicant will apply to per annum—very uncertain).
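As a minimal sketch of that comparison—every number below is purely hypothetical (the follow-through and hiring probabilities, and the dollar-equivalent surplus of a successful EA hire, are assumptions for illustration, not estimates from this discussion):

```python
# Toy expected-value comparison of the two branches:
# (a) earning to give, which succeeds near-trivially if attempted;
# (b) applying to EA jobs, where success also requires beating the
#     counterfactual hire.

p_follow_through = 0.5         # applies to BOTH branches (hypothetical)
p_beats_counterfactual = 0.1   # applies only to branch (b) (hypothetical)

donation_value = 5_000    # $/year from giving 10% of an average US salary
ea_job_surplus = 30_000   # hypothetical $-equivalent surplus over the counterfactual hire

ev_give = p_follow_through * donation_value
ev_apply = p_follow_through * p_beats_counterfactual * ea_job_surplus

print(f"EV of earning to give:    ${ev_give:,.0f}")
print(f"EV of applying to EA jobs: ${ev_apply:,.0f}")
```

The point of the sketch is only that the success probability multiplies branch (b) alone, so the comparison is sensitive to how likely an applicant is to actually beat the counterfactual hire.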
I also can’t speak with certainty as to how organizations will choose, but my sense is that (a) smaller EA organizations are funding constrained and would prefer getting the money; while (b) larger EA organizations are more agnostic because they have both more money and the privilege of getting the pick of the crop for talent (c.f. high demand for GiveWell/OP jobs).
1) I agree that policy talent is important but comparatively scarce, even in GHD. It’s the biggest bottleneck that Charity Entrepreneurship is facing on incubating GHD policy organizations right now, unfortunately.
5) I don’t think it’s safe to assume that the new candidate is better than your current candidate? While I agree that’s fine for dedicated talent pipeline programmes, I’m not confident of making this assumption for general community building, which is by its nature less targeted and typically more university/early-career oriented.
I think this is a legitimate concern, but it’s not clear to me that it outweighs the benefits, especially for roles where experience is essential.
Hi Chris,
Just to respond to the points you raised
(1) With respect to prioritizing India/developing country talent, it probably depends on the type of work (e.g. direct work in GHD/AW suffers less for this), but in any case, the pool of talent is big and the cost savings are substantial, so it might be reasonable to go this route regardless.
(2) Agreed that it’s challenging, but I guess it’s a chicken vs egg problem—we probably have to start somewhere (e.g. HIP etc does good work in the space, we understand).
(3) For 80k, see my discussion with Arden above—AGB’s views are also reasonably close to my own.
(4) On Rethink—to be fair, our next statement after that sentence is “This objection holds less water if one is disinclined to accept OP’s judgement as final.” I think OP’s moral weights work, especially, is very valuable.
(5) There’s a huge challenge in valuing talent, especially early-career talent (especially if you consider the counterfactual of earning to give at a normal job). One useful heuristic is: would the typical EA organization prefer an additional $5k in donations (from an early-career EA giving 10% of their income annually) or 10 additional job applications to a role? My sense from talking to organizations in the space is that (a) the smaller orgs are far more funding constrained, so prefer the former, and (b) the bigger orgs are more agnostic, because funding is less of a challenge but also there is a lot of demand for their jobs anyway.
(6) I can’t speak for OP specifically, but I (and others in the GHD policy space I’ve spoken to) think that Eirik is great. And generally, in GHD, the highest impact work is convincing governments to change the way they do things, and you can’t really do that without positions of influence.
Hi Arden,
Thanks for engaging.
(1) Impact measures: I’m very appreciative of the amount of thought that went into developing the DIPY measure. The main concern (from the outside) with respect to DIPY is that it is critically dependent on the impact-adjustment variable—it’s probably the single biggest driver of uncertainty (since causes can vary by many orders of magnitude). Depending on whether you think the work is impactful (or if you’re sceptical, e.g. because you’re an AGI sceptic, or because you’re convinced of the importance of preventing AGI risk but worried about counterproductivity from getting people into AI etc.), the estimate will fluctuate very heavily (and could be zero or significantly negative). From the perspective of an external funder, it’s hard to be convinced of robust cost-effectiveness (or, speaking for myself as a researcher, it’s hard to validate).
(2) I think we would both agree that AGI (and to a lesser extent, GCR more broadly) is 80,000 Hours’ primary focus.
I suppose the disagreement then is the extent to which neartermist work gets any focus at all. This is to some extent subjective, and also dependent on hard-to-observe decision-making and resource-allocation done internally. With (a) the team not currently planning to focus on neartermist content for the website (the most visible thing), (b) the career advisory/1-1 work being very AGI-focused too (to my understanding), and (c) fundamentally, OP being 80,000 Hours’ main funder, and all of OP’s 80k grants being from the GCR capacity building team over the past 2-3 years—I think from an outside perspective, a reasonable assumption is that AGI/GCR is >=75% of marginal resources committed. I exclude the job board from analysis here because I understand it absorbs comparatively little internal FTE right now.
The other issue we seem to disagree on is whether 80k has made its prioritization sufficiently obvious. I appreciate that this is somewhat subjective, but it might be worth erring on the side of being too obvious here—I think the relevant metric would be “Does an average EA who looks at the job board or signs up for career consulting understand that 80,000 Hours prefers that I prioritize AGI?”, and I’m not sure that’s the case right now.
(3) Bad career jobs—this was a concern aired, but we didn’t have too much time to investigate it, so we just flag it as a potential risk for people to consider.
(4) Similarly, we deprioritized the issue of whether getting people into AI companies worsens AI risk. We leave it to potential donors as something they may have to weigh—considering the pros and cons (e.g. per Ben’s article) and making their decisions accordingly.
EA Meta Funding Landscape Report
I currently do direct work—my organization CEARCH researches cost-effective ideas in GHD/longtermism/meta, and works to direct resources in support of particular promising ideas (e.g. via grantmaking, donor advisory, working with Charity Entrepreneurship and other talent orgs).
However, for most of my career, I was doing a non-EA job (policy work in government and as a consultant), and I engaged with EA simply by giving money to GiveWell. I’ve been a GWWC pledger since 2014, and that to me is classic EA, and the furthest thing from being less engaged or less EA (than someone who does direct work but doesn’t donate).
Edit: And beyond having impact via your donations, you can always attend events (particularly EAGxs) - I think it’s super valuable for younger EAs to get advice from older folks who primarily live and work in non-EA environments, since younger EAs can get stuck in a social and professional environment that is unadulterated EA, the end result of which is adopting a bunch of norms and behaviours that may leave them less effective at achieving impact (e.g. unprofessional workplace or organizational norms, since they literally haven’t worked in a non-EA organization before; or not being used to persuading and engaging non-EA folks, including in government or in corporate environments etc).
Hi Jaime, I’ve updated to clarify that the “MEV” column is just “DALYs per USD 100,000”. Have hidden some of the other columns (they’re just for internal administrative/labelling purposes).
Thanks for the thoughts, Jaime and Nick!
For what it’s worth, CEARCH’s list of evaluated causes (or more specifically, top interventions in various causes) and their estimated cost-effectiveness is here: https://docs.google.com/spreadsheets/d/14y9IGAyS6s4kbDLGQCI6_qOhqnbn2jhCfF1o2GfyjQg/edit#gid=0
I think Nick is fundamentally correct that because uncertainty is so high, sorting isn’t particularly useful. Most grantmaking organizations, to my understanding, prefer to use a cost-effectiveness threshold/funding bar, to decide whether or not to recommend/support a particular cause/intervention/charity.
For ourselves, we use 10x GiveWell for GHD, as (a) most of the money we move is EA money whose counterfactual is GiveWell (so to have impact, the ideas we redirect funding/talent to must be more cost-effective than GiveWell in expectation), and (b) GiveWell is very robust in their discounting relative to us (which takes a lot of time and effort), hence the aggressive bar. An aggressive bar helps ensure that even if our estimated cost-effectiveness is too optimistic relative to GiveWell’s methodology, it can eat a lot of implicit discounts while the true cost-effectiveness remains >GiveWell. (So when we say something is >=10x GiveWell, it’s not literally so—more a reasonably high-confidence claim that it’s probably more cost-effective in expectation.)
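To illustrate the arithmetic with made-up numbers (neither the 12x estimate nor the 85% discount is a real CEARCH figure—both are assumptions for the sketch):

```python
# Hypothetical: our naive estimate says an intervention is 12x GiveWell,
# but GiveWell-style discounting (which we haven't fully applied) would
# shave off 85% of the estimated cost-effectiveness.
naive_multiple = 12.0       # assumed naive cost-effectiveness vs GiveWell
implicit_discount = 0.85    # hypothetical discount we failed to apply

true_multiple = naive_multiple * (1 - implicit_discount)

# Even after eating the discount, the intervention still clears 1x
# GiveWell, which is the counterfactual for the money being moved.
print(true_multiple)
```

This is why the bar is set well above 1x: it buys headroom for the implicit discounts a less conservative estimate hasn’t yet absorbed.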
Final Call: EA Meta Funding Survey
If I’m understanding this concern correctly, it’s along the lines of: “they’re not making a financial sacrifice in shutting down, so it’s less praiseworthy than it otherwise would be”.
Just to clarify, charity founders (at least CE ones) take a pay cut to start their charity—they would earn more working for other EA organizations as employees, and much more in tech/finance/consulting/careers typical of people with Oxbridge/Ivy/etc. education levels. The financial sacrifice was already made when starting the charity, and if anything, quitting is actually better for you financially.
Disclosure: Sarah and Ben are friends, and we came out of the same CE incubation batch, so I’m not unbiased here.
I think it speaks well of a person’s integrity, objectivity, and concern for impact that they’re able to make a clear-eyed assessment that their own project isn’t having the desired impact, and then go ahead to shut it down so as not to burn counterfactually valuable resources.
It’s something that’s worth emulating, and I do try to apply this myself—via regular CEAs and qualitative evaluations of CEARCH’s expected impact (especially as a meta org with a more indirect path to impact). We’re only wasting our own time otherwise!
Hi Vasco,
The GiveWell team handling the nutrition portfolio reached out to me to discuss salt policy, and SSB taxes are on their longlist, iirc. Of note, Vital Strategies (which has gotten GiveWell grants for its alcohol policy work) also does SSB tax advocacy.
I’ve generally moved to the view that geometric means are better in cases where the different estimates don’t capture a real difference but rather a difference in methodology (while using the arithmetic mean makes sense when we are capturing a real difference, e.g. if an intervention affects a bunch of people differently).
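A quick sketch of the difference between the two aggregation choices (the two estimates below are arbitrary illustrative numbers, not from any real model):

```python
import math

def geomean(xs):
    """Geometric mean: suited to estimates that differ because of
    methodology (roughly multiplicative error around a common truth)."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def amean(xs):
    """Arithmetic mean: suited to averaging over a real difference,
    e.g. an intervention affecting different people differently."""
    return sum(xs) / len(xs)

# Two hypothetical cost-effectiveness estimates that disagree by 100x:
estimates = [1.0, 100.0]

print(amean(estimates))    # dominated by the high estimate (50.5)
print(geomean(estimates))  # treats both as equally informative (~10)
```

The arithmetic mean lets one optimistic estimate dominate, whereas the geometric mean treats a 100x disagreement symmetrically—which is the behaviour you want when the spread reflects methodology rather than reality.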
In any case, this report is definitely superseded/out-of-date; Stan’s upcoming final report on abrupt sunlight reduction scenarios is far more representative of CEARCH’s current thinking on the issue. (Thanks for your inputs on ASRS, by the way, Vasco!)
EA Meta Funding Survey
Hi Nick!
Yep, that’s definitely a concern for governments (same with other policy interventions for nutrition). For funders—to be fair, that’s not much different from direct delivery (e.g. for vaccinations or contraception, we can’t really know the impact until we finish our M&E and see the uptake rates/disease rates change).
Hi Mo,
I don’t think I read that part of Michael’s thesis before, but it does look interesting!
In general, I think it’s fairly arbitrary what a cause is—an intervention/solution can also be reframed as a problem (and hence a cause) through negation (e.g. physical activity is a preventative solution to various diseases like cardiovascular disease or diabetes, and in a real sense physical inactivity is a problem; having an ALLFED-style resilient food supply is a mitigatory solution to nuclear winter—even if we can’t prevent nuclear exchange, we can perhaps stop billions from dying from famine—and in that sense lack of foods capable of growing in abrupt sunlight reduction scenarios is a problem).
Hi Gisele,
At CEARCH (https://exploratory-altruism.org/), we generally agree that combating non-communicable chronic diseases is highly cost-effective (e.g. salt reduction policies to combat high blood pressure, sugar drinks taxes to combat obesity, also things like trans fat bans or alcohol taxes).
As part of our grantmaking work, we’re on the lookout for charities/NGOs working on these issues (or more generally on advocating for health policy, and helping governments implement such policies). If you are aware of any organizations in this space, do let us know!