I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.
Joel Tan
A useful post, Ozzie, and definitely food for thought.
I would just like to point out one fairly significant consideration in favour of small organizations that isn’t factored in here—ownership and motivation (i.e. the founder and other early-stage employees work far harder because they feel the organization is theirs—you are the organization; you don’t work for it). This has been my own experience, and I imagine it’s much the same for you. I believe Joey Savoie talks about this fairly often when asked why Charity Entrepreneurship doesn’t just hire people to implement effective global health & animal ideas in-house, rather than using these people to incubate new orgs.
Hi Bob & team,
Really great work. Regardless of my specific disagreements, I do think calculating moral weights for animals is literally some of the highest value work the EA community can do, because without such weights we can’t compare animal welfare causes to human-related global health/longtermism causes—and hence cannot identify and direct resources towards the most important problems. And I say this as someone who has always donated to human causes over animal ones, and who is not, in fact, vegan.
With respect to the post and the related discussion:
(1) Fundamentally, the quantitative proxy model seems conceptually sound to me.
(2) I do disagree with the idea that your results are robust to different theories of welfare. For example, I myself reject hedonism and accept a broader view of welfare (given that we care about a broad range of things beyond happiness, e.g. life/freedom/achievement/love/whatever). If (a) such broad welfarist views are correct, (b) you place a sufficiently high weight on the other elements of welfare (e.g. life per se, even if neutrally valenced), and (c) you don’t believe animals can enjoy said elements of welfare (e.g. if most animals aren’t cognitively sophisticated enough to have preferences over continued existence), then an additional healthy year of human life would plausibly be worth a lot more than an equivalent animal year, even after accounting for similar degrees of suffering and the relevant moral weights as calculated.
(3) I would like to say, for the record, that a lot of the criticism you’re getting (and I don’t exempt myself here) is probably subject to a lot of motivated reasoning. I am personally uncertain as to the degree to which I should discount my own conclusions for this reason.
(4) My main concern, as someone who does human-related cause prioritization research, is the meat eater argument and whether helping to save human lives is net negative from an overall POV, given the adverse consequences for animal suffering. I am moderately optimistic that this is not so, and that saving human lives is net positive (as we want/need it to be). Having very roughly run the numbers myself using RP’s unadjusted moral weights (i.e. not taking into account point 2 above) and inputting other relevant data (e.g. on per capita consumption rates of meat), my approximate sense is that in saving lives we’re basically buying 1 full week of healthy human life for around 6 days of chicken suffering, or above 2 days of equivalent human suffering—which is worth it.
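For transparency, here is a minimal sketch of the kind of BOTEC I have in mind, with purely illustrative placeholder values (not my actual inputs, and not RP’s published figures):

```python
# Illustrative meat-eater-problem sketch. All numbers below are placeholders
# assumed for demonstration; substitute your own country-level consumption
# data and RP's unadjusted welfare ranges to replicate the actual exercise.

human_years_gained_per_life_saved = 1.0   # consider a single healthy year for simplicity
chicken_years_per_human_year = 0.9        # extra farmed chicken-years induced per human life-year (placeholder)
suffering_fraction = 1.0                  # fraction of a farmed chicken's life spent suffering (placeholder)
chicken_moral_weight = 0.33               # chicken welfare relative to a human (placeholder)

chicken_suffering_years = (
    human_years_gained_per_life_saved * chicken_years_per_human_year * suffering_fraction
)
human_equiv_suffering_years = chicken_suffering_years * chicken_moral_weight

print(f"Per healthy human life-year bought: "
      f"{chicken_suffering_years:.2f} chicken-years of suffering, "
      f"or {human_equiv_suffering_years:.2f} human-equivalent years of suffering")
```

With these placeholder inputs, each healthy human year costs roughly a third of a human-equivalent year of animal suffering, which is the same qualitative shape as the week-for-days trade described above.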
Thanks for the write-up, Julia. I’ll say that this dovetails with my experience working in the non-EA world, including in organizations where things went really, really bad.
My main recommendation is that, even if it is hard, staff stand up for themselves and their colleagues, and push back against bad bosses—something that is necessary even if not sufficient. This goes double for those of us who are senior staff:
(1) You are harder to replace, and your opinion carries more weight.
(2) You have more working experience, and unlike your more junior colleagues, you know that what’s happening isn’t normal and isn’t acceptable—something that isn’t necessarily obvious to someone for whom this is their first job out of university.
(3) You may be more financially secure, but this depends (e.g. new mortgages and kids, or being on a work visa make things harder).
(4) Your silence is tacit acceptance.
On (2). If you go to 80k’s front page (https://80000hours.org/), there is no mention that the organization’s focus is AGI or that they believe it to be the most important cause. For the other high-level pages accessible from the navigation bar, things are similarly not obvious. For example, in “Start Here”, you have to read 22 paragraphs down to understand 80k’s explicit prioritization of x-risk over other causes. In the “Career Guide”, it’s about halfway down the page. On the 1-1 advising tab, you have to go down to the FAQs at the bottom of the page, and even then it only refers to “pressing problems” and links back to the research page. And on the research page itself, the issue is that it doesn’t give a sense that the organization strongly recommends AI over the rest, or that x-risk gets the lion’s share of organizational resources.
I’m not trying to be nitpicky, but trying to convey that a lot of less engaged EAs (or people who are just considering impactful careers) are coming in, reading the website, and maybe browsing the job board or thinking of applying for advising—without realizing just how convinced of AGI 80k is (and correspondingly, not realizing how strongly they will be sold on AGI in advisory calls). This may not just be less engaged EAs either, depending on how you define engaged—I’ve been reading Singer for two decades; have been a GWWC pledger since 2014; and whenever giving to GiveWell have actually taken the time to examine their CEAs and research reports. And yet until I actually moved into direct EA work via the CE incubation program, I didn’t realize how AGI-focused 80k was.
People will never get the same mistaken impression when looking at Non-Linear or Lightcone or BERI or SFF. I think part of the problem is (a) putting up a lot of causes on the problems page, which gives the reader the impression of a big tent/broad focus, and (b) having normie aesthetics (compare: longtermist websites). While I do think it’s correct and valuable to do both, the downside is that without more explicit clarification (e.g. what Non-Linear does, just bluntly saying on the front page in font 40: “We incubate AI x-risk nonprofits by connecting founders with ideas, funding, and mentorship”), the casual reader of the website doesn’t understand that 80k basically works on AGI.
Strongly agree. Having been on both sides of the policy advocacy fence (i.e. in government and as a consultant/advocate working from the outside), policy ideas have to be concrete. Asking the government to improve disease surveillance (as opposed to something specific like, e.g., implementing threatnet) is about as useful as asking a government to improve education outcomes by improving pedagogy, or to boost the economy by raising productivity.
Of course, you don’t have to be an expert yourself per se, but you have to talk to those who are, and get their inputs—and beyond a certain point, if your knowledge of that specific space becomes great enough after working in it for a long time, you’re practically an expert yourself.
While EA is great, a lot of us have naive views of how governance works, and for that matter have overly optimistic theories of change of how abstract ideas and research affect actual policies and resource allocation, let alone welfare.
While the idea of moral licensing makes sense to me in theory, I’m not too persuaded by the empirical evidence, at least from the cited meta-analysis—the publication bias is enormous, as the authors note.
CEARCH did a shallow dive on ageing a few months ago (link), but our tentative conclusion is that life extension research doesn’t look too cost-effective (n.b. ideas tend to look good up front and worse as we understand the idea more and discount for more complications, so if anything we should expect the cause to be even less cost-effective than current estimates suggest).
In any case, Nuno has a list of relevant research here if anyone wants to read more.
Disclosure: Sarah and Ben are friends, and we came out of the same CE incubation batch, so I’m not unbiased here.
I think it speaks well of a person’s integrity, objectivity, and concern for impact that they’re able to make a clear-eyed assessment that their own project isn’t having the desired impact, and then go ahead and shut it down so as not to burn counterfactually valuable resources.
It’s something that’s worth emulating, and I do try to apply this myself—via regular CEAs and qualitative evaluations of CEARCH’s expected impact (especially as a meta org with a more indirect path to impact). We’re only wasting our own time otherwise!
A couple of considerations I’ve thought about, at least for myself:
(1) Fundamentally, giving helps save/improve lives, and that’s a very strong consideration that we need equally strong philosophical or practical reasons to overcome.
(2) I think value drift is a significant concern. For less engaged EAs, the risk is about becoming non-EA altogether; for more engaged EAs, it’s more about becoming someone less focused on doing good and more concerned with other considerations (e.g. status); this doesn’t have to be an explicit thing, but rather biases the way we reason and decide in a way that means we end up rationalizing choices that helps ourselves over the greater good. Giving (e.g. at the standard 10%) helps anchor against that.
(3) From a grantmaking/donor advisory perspective, I think it’s hard to have moral credibility otherwise, and such credibility can be absolutely necessary (e.g. advising grantees to put up modest salaries in their project proposals, not just to increase project runway but also the chances that our donor partners approve the funding request). And it is both psychologically and practically hard to do this if you’re not just earning more but earning far more and not giving to charity! Why would they listen to you? The LMIC grantees especially may be turned off, even disillusioned, by the fact that they have to accept peanuts while those of us with power over them draw fat stacks of cash! The least we can do is donate! Relatedly, I think part of Charity Entrepreneurship’s success is absolutely down to Joey and co leading by example and taking low salaries.
(4) Runway is a legitimate consideration, especially since there are a lot of potentially impactful things one can do but which won’t be funded upfront (so you need to do it on savings, prove viability and then get it funded). However, I don’t think this is sufficient to outweigh points 1-3.
(5) In general, I think it’s not useful at all to compare with how much others are earning—that only leads to resentment, unhappiness, and less impactful choices. For myself, the vast majority of my friends are non-EAs; we have similar backgrounds (elite education, worked for the Singapore government as policy officers/scholars at one point or another), and yet since leaving government I’ve had a riskier career, earn far less, have fewer savings, and am forced to delay having a family/kids for all those reasons. All of this is downstream of choices I’ve made as an EA, particularly in avoiding job offers that paid very well but which didn’t have impact (or, in fact, had negative impact). Is the conclusion I’m supposed to draw that I’ve made a mistake with my life? I don’t think so, because statistically speaking, some random African kid out there is alive as a result of my donations (and hopefully, my work), and that’s good enough for me.
Great and comprehensive piece. I’m personally very enthusiastic about this intervention, based on Charity Entrepreneurship’s report on tobacco taxation, which found that it’s extremely cost-effective (i.e. maybe USD 27-37 per DALY, which is >GiveWell top charities in expectation), and also on CEARCH’s own research that global health policy interventions tend to be enormously cost-effective.
Important points to add.
(1) I would push back on modelling such mere speeding up as involving simply a set number of years (year of introduction sans intervention minus year of introduction with intervention) in which the intervention counterfactually applies. It’s important to realize that future tax increases apply on top of the intervention—in this sense, it’s better to think of the tax as a permanent level increase, which of course brings long-term benefits.
(2) These benefits are subject to various discount rates, but generally (a) tobacco taxes are sticky (i.e. aren’t reversed), though inflation can be an issue, and (b) as you say, DALY burdens are growing due to population growth outweighing the secular decline in per capita tobacco consumption.
(3) Economic benefits are around 10% of the health benefits for tobacco, last I did an analysis on this, so agreed that their lack of incorporation in the BOTEC doesn’t alter conclusions overmuch.
(4) The main issue not talked about in this report, but which you’ll find is most politically salient, is regressivity—policymakers are afraid the poor are disproportionately hurt, which is a legitimate concern. That said, (a) low SES populations (and youths) tend to be disproportionately price sensitive, so their consumption falls more than average (and hence they’re hurt less than they otherwise would be), and (b) you can design income-targeted lump-sum compensation.
(5) Caleb Parikh and Joel Burke can speak to this better than me, but my understanding is that the Ministries of Finance tend to be the obstacle in lobbying attempts—not the Ministries of Health, which tend to be supportive. To address the former’s concern (which will be revenue-related), it’s important to point out that tax revenues actually rise with tobacco excise rates, at least over the short and medium term, due to the elasticities involved (see the sketch after point 9 below).
(6) From a consumer welfare perspective (i.e. the pleasure from smoking argument), Gruber had an interesting paper showing that, far from reducing consumer welfare, excise taxes make the smoking population – whether actual smokers, former smokers or even potential smokers – happier, by helping them overcome their time-inconsistent preferences (i.e. helping them quit something that gives them pleasure in the short term but which reduces their overall life satisfaction in the long term). And this makes sense—that’s literally what addiction is.
(7) For the issue of freedom of choice: under plausible moral weights, the health benefits dominate the autonomy considerations up until the point where you transition from taxes to de jure bans, so this is not a particularly significant issue.
(8) Smuggling/black market considerations are unlikely to be an issue. While there are theoretical concerns about higher taxes causing increased black market activity, such worries are not borne out by the empirical evidence. As Schwartz and Zhang (2016) find, the international experience has been that raising tobacco prices either (a) fails to raise contraband tobacco activity at all, (b) does so only temporarily, or (c) causes a sufficiently small increase in black market activity that cigarette consumption still falls.
(9) Critically, this shallow report may be too bearish on the chances of tobacco tax advocacy. CE’s review of 159 case studies suggests about a 27% chance of success, and while I would discount this to some extent due to selection bias (i.e. taxes being pushed in countries where success is more likely), this would still suggest a >10% chance of success.
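On the revenue point in (5), here is a minimal sketch of the arithmetic, assuming an illustrative own-price elasticity and full pass-through of the excise increase to retail prices (both are assumptions for demonstration, not figures from the report):

```python
# Why excise revenue rises even as consumption falls: demand is inelastic.
# All numbers are illustrative placeholders.

baseline_price = 1.00    # retail price per pack (arbitrary units)
baseline_packs = 100.0   # packs sold per period (arbitrary units)
baseline_excise = 0.40   # excise per pack
elasticity = -0.4        # assumed own-price elasticity of demand for tobacco

new_excise = 0.60                                             # raise excise by 0.20 per pack
new_price = baseline_price + (new_excise - baseline_excise)   # assume full pass-through

price_change = (new_price - baseline_price) / baseline_price
new_packs = baseline_packs * (1 + elasticity * price_change)

print(f"Consumption: {baseline_packs:.0f} -> {new_packs:.0f} packs")
print(f"Excise revenue: {baseline_excise * baseline_packs:.1f} -> {new_excise * new_packs:.1f}")
# Consumption falls ~8%, but revenue rises ~38%: the rate increase dominates
# because |elasticity| < 1.
```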
All in all, great work, and I am keen to see more direct work in this area.
The Gruber paper (linked below in my comment) suggests that reducing smoking actually makes the population of smokers and potential smokers happier.
In any case, it doesn’t appear to me true that most smokers don’t want to quit—see data on the US. And even in China, where most people don’t want to quit, a strong majority (70%) supports the government doing more to control smoking.
GiveWell is available (search Clear Fund)!
Great curation. I would have missed this otherwise.
There’s a well-established bias in media in general towards negative reporting—it’s just what people are more interested in/animated by (see: https://www.pnas.org/doi/10.1073/pnas.1908369116). It’s the same reason why negative stuff tends to get shared more on FB and social media in general, iirc.
Basically, it’s not an EA-specific issue. When was the last time you read a story about (virtually all) planes not crashing when they take off, or unemployment not being a problem for the vast majority of the population?
If I’m understanding this concern correctly, it’s along the lines of: “they’re not making a financial sacrifice in shutting down, so it’s less praiseworthy than it otherwise would be”.
Just to clarify, charity founders (at least CE ones) take a pay cut to start their charity—they would earn more working for other EA organizations as employees, and much more in tech/finance/consulting/careers typical of people with Oxbridge/Ivy etc. education levels. The financial sacrifice was already made when starting the charity, and if anything, quitting is actually better for you financially.
Hi Arden,
Thanks for engaging.
(1) Impact measures: I’m very appreciative of the amount of thought that went into developing the DIPY measure. The main concern (from the outside) with respect to DIPY is that it is critically dependent on the impact-adjustment variable—it’s probably the single biggest driver of uncertainty (since causes can vary by many orders of magnitude). Depending on whether you think the work is impactful (or if you’re sceptical, e.g. because you’re an AGI sceptic, or because you’re convinced of the importance of preventing AGI risk but worried about counterproductivity from getting people into AI, etc.), the estimate will fluctuate very heavily, and could be zero or significantly negative (see the toy sketch at the end of this comment). From the perspective of an external funder, it’s hard to be convinced of robust cost-effectiveness (or speaking for myself, as a researcher, it’s hard to validate).
(2) I think we would both agree that AGI (and to a lesser extent, GCRs more broadly) is 80,000 Hours’ primary focus.
I suppose the disagreement then is the extent to which neartermist work gets any focus at all. This is to some extent subjective, and also dependent on hard-to-observe decision-making and resource allocation done internally. With (a) the team not currently planning to focus on neartermist content for the website (the most visible thing), (b) the career advisory/1-1 work being very AGI-focused too (to my understanding), and (c) fundamentally, OP being 80,000 Hours’ main funder, and all of OP’s 80k grants coming from the GCR capacity building team over the past 2-3 years—I think from an outside perspective, a reasonable assumption is that AGI/GCR gets >=75% of marginal resources committed. I exclude the job board from analysis here because I understand it absorbs comparatively little internal FTE right now.
The other issue we seem to disagree on is whether 80k has made its prioritization sufficiently obvious. I appreciate that this is somewhat subjective, but it might be worth erring on the side of being too obvious here—I think the relevant metric would be “Does an average EA who looks at the job board or signs up for career consulting understand that 80,000 Hours prefers that they prioritize AGI?”, and I’m not sure that’s the case right now.
(3) Bad career jobs—this was a concern aired, but we didn’t have too much time to investigate it; we just flag it as a potential risk for people to consider.
(4) Similarly, we deprioritized the issue of whether getting people into AI companies worsens AI risk. We leave it to potential donors as something they might have to weigh, considering the pros and cons (e.g. per Ben’s article) and making their decisions accordingly.
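On point (1), a toy illustration of the sensitivity I have in mind (this is a stylized formula with made-up numbers, not 80k’s actual DIPY methodology):

```python
# Toy sensitivity check: how a DIPY-style estimate moves with the
# impact-adjustment variable. Stylized formula and made-up numbers only.

plan_changes = 100       # counterfactual career plan changes (made up)
years_per_change = 10    # discounted years of redirected labour per plan change (made up)

# Impact adjustment: how valuable a year of redirected labour is, relative to
# some benchmark. Sceptics and enthusiasts can reasonably disagree by orders
# of magnitude, or even on the sign.
for impact_adjustment in [-0.1, 0.0, 0.01, 0.1, 1.0, 10.0]:
    estimate = plan_changes * years_per_change * impact_adjustment
    print(f"impact_adjustment={impact_adjustment:>5}: estimate={estimate:>8.1f}")

# The same underlying data yields estimates spanning several orders of
# magnitude (and can be zero or negative), so the adjustment dominates.
```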
Sounds great!
One question I have is about the extent to which this is counterfactually better than just advising foundations to dump money into GiveWell charities; or, if one thinks GiveWell is too risk averse or too short-termist, there are other cause/intervention/charity evaluation organizations out there like CE or Open Phil or 80k etc.
I would be fairly sceptical that smaller grantmaking organizations will be as accurate at identifying cost-effective or promising opportunities relative to established organizations like GiveWell (who have big teams and a process that has been refined and improved over the years), or even compared to smaller organizations like CE/HLI/CEARCH doing cause prioritization (whose research teams are presumably staffed by people whose interests/abilities meant they self-selected into cause prioritization work, rather than people who by good/bad luck are put into a position where they have to disburse, say, an inheritance or the family office funds).
Having spoken to some EA-sympathetic family offices, this is basically what they’ve said (i.e. they largely prefer to outsource the decision-making, subject to non-philanthropic/political considerations like giving locally etc).
In any case, keep up the great work, as always.
CEARCH did a shallow dive into this (https://docs.google.com/spreadsheets/d/116DqgnzADo8zAmJ_QAp9AKcjKPMlgBs2Hc9E7SSAASM/edit#gid=0) and our preliminary conclusion is that the marginal expected value of funding life extension research doesn’t meet our threshold of 10x GiveWell. A lot of uncertainty, obviously, but generally things look good upfront and worse later, so this wasn’t a promising sign, and we decided not to spend more time on this.
I think you’re right in pointing out the limitations of the toy model, and I strongly agree that the trade-off is not as stark as it seems—it’s more realistic to model it as a delay from applying to EA jobs before settling for a non-EA job (and this won’t be like a year or anything).
However, I do worry that the focus on direct work means people generally neglect donations as a path to impact, and so the practical impact of deciding to go for an EA career is that people decide not to give. An unpleasant surprise I got from talking to HIP and others in the space is that the majority of EAs probably don’t actually give. Maybe it’s the EA boomer in me speaking, but it’s a fairly different culture compared to 10+ years ago, when being EA meant you bought into the drowning child arguments and gave 10% or more to whatever cause you thought most important.
(1) I think Joey’s right, and I’ll phrase the issue in this way—a lot of EAs underrate the impact of habit-formation and overrate the extent to which most of your choices even require active willpower. Your choices change who you are as a person, so what was once hard becomes easy.
I’ve always given at least 10% to effective charities, and now it’s just something I do; it’s barely something I have to think about, let alone requiring some heroic exertion of will. And while I’m not vegan, I am successfully eating less meat even on a largely keto diet, and what surprised me is how much easier it is than I thought it would be.
(2) Let’s accept for the sake of argument that there is a lot of heterogeneity, such that for some people the impact of habit formation is weak and it is psychologically very difficult for them to consistently adhere to non-job avenues to impact (e.g. donating, being vegan, etc). Even so, how would one know in advance? Why not test it out, to see if you’re in the group for which habit formation impact is high and these sacrifices are easy, or if you are in the other group?
Surely it’s worth doing—the potential impact is significant, and if it’s too hard you can of course stop! But many people will be surprised, I think, at just how easy certain things are when they become part of your daily routine.