Should you switch away from earning to give? Some considerations.
At EA Global, a common discussion topic was that direct work is becoming increasingly high-priority compared to earning to give. I’ve contributed to this conversation, and I’m glad we’re talking about this. This post is just to give some considerations on the other side, to ensure that (i) we approximate allocative efficiency, where the people who have the strongest comparative advantage in earning to give do so, and those who have the strongest comparative advantage in direct work do so; and that (ii) we don’t overcorrect. There were a few conversations I had at EA Global where I thought there were some considerations that should be on the table that weren’t widely known about, and weren’t written up publicly, so I’ve chosen to write them up here. I should caveat that these are just my personal off-the-cuff thoughts, rather than necessarily representative of CEA or 80,000 Hours. This also isn’t meant to be a complete picture; it’s just some considerations that I think might not currently be widely known or fully understood.
Background
The primary reason people were less excited by earning to give was simply that we’ve already raised a lot of money, and diminishing marginal returns mean that additional money becomes comparatively less important.
It’s true that, as a community, we’ve raised an awful lot of money; it’s very plausible that we’ve had comparatively greater success at moving money to the highest-priority issues than convincing people to work on those issues. What’s more, Open Philanthropy has started making grants in the areas that people in the community often think of as highest-priority: factory farming, global catastrophic risks, artificial intelligence, and building the effective altruism community.
These facts make me believe that fewer people should earn to give than I used to believe several years ago. But I still think some people should earn to give; in my previous post a year ago I asked, “At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?” The median and mean answer among myself, Ben Todd, Roman Duda and Rob Wiblin was 15%. This number still seems about right to me: the people with the highest comparative advantage in earning to give should continue to earn to give; other people should do direct work (including research, advocacy, policy, socially-motivated entrepreneurship, etc).
Allocative Efficiency
My primary concern going forward is that we as a community fail on allocative efficiency. Different people vary considerably on both their donation potential and their potential at direct work. Ideally, if 15% of people should earn to give long-term, then it’s the 15% of people who have the strongest comparative advantage at earning to give.
For this reason, I believe that those people who have the strongest comparative advantage at earning to give, such as those working in quantitative trading, or who are already far along in their earning-to-give career, should at least wait and see over the next year or two how the community responds and develops before switching to direct work. To emphasise, this is about comparative advantage: if you have lots of amazing options, then the fact that you have an amazing earning-to-give option doesn’t mean you should earn to give. For example, for someone concerned about AI safety who has the option to work on AI safety at an AI lab, it’s plausible to me that they ought to do that even over a great earning-to-give job in quantitative trading.** [Later edit] For other people, for example people earning to give in software engineering, who have skills that would be useful in direct work and who would be happy either earning to give or doing direct work, I’d still encourage them to seriously think about what sorts of direct work they could do.
In general, when considering whether to do direct work or earn to give, you could ask yourself: am I in the top 15% of people in terms of comparative advantage at earning to give?
Overcorrection
As well as ensuring that we get allocative efficiency, it’s also possible that the community will overcorrect to changes in circumstance. So, for people who are currently pursuing earning to give and wondering whether to switch to direct work, here are some countervailing considerations to bear in mind:
- You may differ from Open Phil or other major funders in your assessment of the funding gaps within what you think of as the highest-priority issues. As a hypothetical: if you think that factory farming would still be the highest-priority problem even if a further $100mn/yr were put towards it, whereas Open Phil think that the room for more funding is only $10mn/yr, then earning to give, from your perspective, would still be very valuable over the long run, even though you’re both funding it as much as you can right now.
- Over time, your views or the views of Open Phil and other major funders may change substantially on what the top priorities are. Even if you’re in perfect agreement right now, that might change in the future; earning to give can be an important hedge against this possibility.
- For some organisations, Open Phil and other major funders might wish to limit their donations to a certain percentage of the organisation’s overall budget, in order to ensure that the organisation doesn’t become overly dependent on a single donor. In some circumstances, this can actually strengthen the argument for earning to give, because every dollar you donate unlocks an additional amount of room for more funding from those %-capped major donors (see the sketch after this list).
- There will be giving opportunities that Open Phil and other major funders won’t look at, for example because the funding gap for the organisation in question is comparatively small.
- We can already see people in the community adapting their plans in response to the shift in emphasis away from earning to give, so you need to take this adjustment into account. Even if too many people are earning to give right now, this might not be true in two years, after the adjustment takes place.
- Similarly, organisations are able to increase their room for more funding in response to a greater availability of funding. For example, many organisations that people in the community donate to pay significantly less than market rates; it’s possible that with a greater abundance of funding they could pay more in order to attract more experienced people; or they could start hiring more expert contractors, who are typically expensive; or they could come up with innovative ways of spending money that don’t require hiring a lot of people.
- Similarly, new organisations within a particular area may come into existence if it’s widely known that there’s funding for such organisations.
- In some cases, it’s possible to ask the organisations where you might work for the amount of donations at which they’d be indifferent between having you work for them and having you donate more. This can be an awkward conversation to have, but it does enable you to make a more direct comparison between earning to give and direct work.
- There may be fewer people earning to give than you think. Only 10% of attendees at EAG were earning to give as their long-term plan for impact; this is less than the 15% suggestion I made in my previous blog post on the topic. In an informal survey of organisations in the effective altruism community done by 80,000 Hours at EA Global, respondents on average claimed to be only slightly more people-constrained than funding-constrained.
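To make the %-cap point concrete, here’s a minimal sketch with made-up cap levels (no real funder’s policy is implied): if a major funder caps its grant at a given fraction of an organisation’s total budget, each small-donor dollar supports cap/(1 − cap) additional capped dollars.

```python
# A minimal sketch, with made-up cap levels (no real funder's policy is
# implied): if a major funder caps its grant at fraction `cap` of an
# organisation's total budget, then budget = small_donations / (1 - cap),
# so each extra small-donor dollar unlocks cap / (1 - cap) capped dollars.

def unlocked_per_dollar(cap: float) -> float:
    assert 0 <= cap < 1
    return cap / (1 - cap)

for cap in (0.25, 0.50, 0.75):
    print(f"cap of {cap:.0%}: $1 donated unlocks ${unlocked_per_dollar(cap):.2f} more")
# cap of 25%: $1 donated unlocks $0.33 more
# cap of 50%: $1 donated unlocks $1.00 more
# cap of 75%: $1 donated unlocks $3.00 more
```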
The area I know in most depth is funding of effective altruism community-building. In this area, all of the considerations above weigh on my mind; I think it would be a very precarious position (both for insurance and impartiality reasons) if the EA community were heavily dependent on a single donor, or a small number of donors. I wouldn’t be that surprised if in a few years we switched to emphasising how funding-constrained rather than people-constrained we are (though I’m aware that other people disagree with me on the likelihood of this).
Career advice is hard to give general recommendations about, because everyone’s circumstances and options are so different. I’m glad that we’re now more heavily emphasising career paths other than earning to give. But I think that if you’re particularly well-suited to earning to give, compared to your other options, or if you’d gain a lot of skills by earning to give, or if you’d be particularly happy in that path, or if you are particularly unsure about which problems are highest priority to tackle, then it’s often still a great option for having a positive impact, and you should be cautious about moving on from that.
Thanks to Nick Beckstead, Holden Karnofsky and Benjamin Todd for comments on an earlier draft.
**[Later Edit] To clarify again, this is about comparative advantage, not absolute advantage: x has a comparative advantage over y at producing G iff x can produce G at a lower opportunity cost than y. (In this post I’m most interested in comparative advantage within the EA community). Example: If Jane can earn to give and donate $100,000 per year, or do research and write 8 papers per year; and Joe can earn to give and donate $50,000 per year or do research and write 3 papers of the same quality that Jane could write per year, then Jane has a comparative advantage in research (giving up 8 research papers for every $100,000 donated) and Joe has a comparative advantage at earning to give (giving up only 6 research papers for every $100,000 donated).
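For readers who prefer it spelled out, the same example in a few lines of Python (the figures are the ones from the example above; nothing new is assumed):

```python
# The Jane/Joe example above in code: comparative advantage is about
# opportunity cost, not raw output. Figures are from the example.

people = {
    "Jane": {"donations": 100_000, "papers": 8},
    "Joe": {"donations": 50_000, "papers": 3},
}

for name, p in people.items():
    # papers forgone per $100,000 donated: the opportunity cost of EtG
    cost = p["papers"] * 100_000 / p["donations"]
    print(f"{name} gives up {cost:.0f} papers per $100k donated")
# Jane gives up 8 papers per $100k -> her comparative advantage: research
# Joe gives up 6 papers per $100k  -> his comparative advantage: EtG
```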
[Second later edit]: Disclosure: I am CEO of the Centre for Effective Altruism, which is funded in significant part by the donations of effective altruists, including donors who earn to give. You can find out more about my background at www.williammacaskill.com.
Comments
[Speaking just for myself here, not for my employer, the Open Philanthropy Project, which is housed at GiveWell]
UPDATED 8/27/16. I added the name of my employer to the top of the post because Vipul told me offline that he thinks “my financial and institutional ties . . . could be construed as creating a conflict of interest” in this post.
One of the things that makes this decision so hard for anybody considering ETG to fund relatively small projects (the kind staffed foundations might miss) is that projects that receive funding get far more visibility than projects that do not.
This makes it incredibly hard to figure out what the right margin is and how many projects are at that margin (particularly important when you know lots of others are making the same decision at the same time). Unless they do an incredible amount of research, a potential ETGer can mostly see examples of projects they support that WERE funded and then speculate on whether they were close to not being funded. You can also look at projects that are currently fundraising but, again, it’s hard to tell in advance how many of them will actually struggle to get support.
If I were CEA/80k and wanted to make progress on this question, I think the first project I’d try would be to create a list of people willing to disclose projects that they tried and failed to fundraise for over the last year or two. Ideally, they’d also give some sense of their own opportunity cost—what they ended up doing instead (this is especially important if the list included projects pitched by medium/large EA orgs, where staff who didn’t get funding for one thing may have ended up just working on a different priority, which is pretty different from somebody who wanted to quit their job to start something and couldn’t).
There are all kinds of reasons this would be imperfect. It wouldn’t be a complete survey. It wouldn’t account for the potential growth of the community. It wouldn’t capture all of the effects of a bigger funding pool—e.g. projects happening faster, less time wasted fundraising, people feeling more confident pitching projects in the first place because their odds are higher. But I think it’d be a lower bound with fairly high information content. If I were 80k and advising lots of people on whether to ETG at the same time, I’d like to see something like this.
A survey of EAs would probably identify a bunch of projects and CEA could also ask ETGers and people who see lots of pitches (e.g. EAV, Carl, Nick) if they can ask rejected people whether they’d be willing to disclose. Presumably 80k also knows of advisees who considered starting an organization but couldn’t get funded. It’s a bit embarrassing to admit failure but it’d be worth it for some people as it might also give them another shot at funding. Although whoever carried this out would have to make sure it stayed a list of projects that failed to fundraise or else it’ll just become a big pitch bank.
This post initially didn’t disclose the name of my employer (Open Phil, which is housed at GiveWell) at the top. I’d be interested in feedback on whether that was a mistake and whether anybody feels like there’s a conflict they wish I’d disclosed. Context is that Vipul told me offline that he thinks there could be one.
My main reason for not disclosing is that I didn’t consciously think much about it and was being kind of lazy because I wrote the comment on my phone on BART. “Speaking just for myself here, not for my employer” is shorter.
I always explicitly disclose who I work for when something that closely touches on our work is being discussed. In this particular case I don’t actually see the conflict and I’m actually not even sure what direction my financial/institutional interests point. But here are some other factors:
1) Even if I say I’m not speaking for GiveWell/Open Phil, I think prominently mentioning their name in all of my comments creates a stronger association between my views and Open Phil’s and makes it more likely that people will confuse my own views with my employer’s. I think this is a pretty big risk because some people could make big decisions based on their predictions of Open Phil’s actions. This concern (and the general friction caused by needing to think about it) has frequently stopped me from publicly commenting on things.
2) I worry that flagging all my posts with the name of my employer, which is high status in this community, uses their credibility to artificially inflate my own. In general, I think it would be bad for EA if it feels like Open Phil/GW is throwing their weight around.
3) I mostly thought about this as a suggestion to Will/80k/CEA all of whom know me well and know where I work. I email them with suggestions fairly frequently and am used to giving them thoughts w/o needing to disclose.
Curious about what others think.
I think that this isn’t a mistake and I think Vipul’s being ridiculous, FWIW.
Why is this a failure mode? What if someone deliberately created a big pitch bank for the purpose of collecting these kinds of statistics? (“Kickstarter/AngelList for effective nonprofits.” Edit: It seems that AngelList does allow nonprofits to list, but the EA community might also have unique funding needs, as described in this post.) This could solve some of the data collection issues, since you’re giving people an incentive to put their info in your database. It could also potentially address issues related to time spent fundraising and ease of pitching new projects without requiring any new charitable funds (beyond those required to create the pitch bank itself). Heck, it might even eliminate the need for people to have their failure to fundraise analyzed publicly, if a more liquid market solves the original problem of matching supply and demand better.
I know this is kind of what Effective Altruism Ventures was. I’m not entirely clear on why it’s no longer in operation. Kerry mentioned difficulty finding both quality projects and generous donors—apparently resources for EAV were allocated towards other projects that were doing better. So maybe this is something that only starts to be worth the overhead once the community reaches a certain size.
Oh—in the long run a pitch bank could definitely be good. It might be more valuable than the project I was suggesting. Although it would also, I think, take substantially more work to do well. You’d need to keep it updated, create a way to get in touch with potential grantees, etc.
The reason I think it would corrupt the data is because if the list included lots of projects that are still fundraising (and perhaps only recently started fundraising) then it would no longer help someone figure out today which projects are actually on the margin. It would make for interesting data in a year or so once we could see which projects from the list were still fundraising.
This is a nice idea!
My impression is that most EA orgs are paying employees significantly below the market rate. Given that most EA orgs don’t have the cash on hand to give their staff a pay rise to market rates, there may be considerable human capital they are missing out on (imagine an effective employee who is nonetheless unwilling to be paid below market). So I’m not sure that EA at the moment isn’t money-constrained. (On the other hand, I confess to looking around at some marginal projects I am aware of and not being greatly impressed by their upside or likelihood of success.)
My hunch is it will move in that direction in the future, as most E2Gers are fairly early in their careers and will, in expectation, earn more in the long term.
One reason to expect a long term excess of EAs earning to give is that EtG can be much less of a sacrifice than direct work. Earning to give lets you keep your cushy job, normally means you end up living on more than most people doing direct work live on, and is easy to exit if you start feeling more selfish in the future.
So if you’re more altruistic than most EAs, you should overcorrect away from earning to give.
ETA: Also, there are lots of EAs who have only recently started EtG. They’re going to massively increase their donations over the next few years, so we shouldn’t forget to plan for that too. I don’t think most of these people will switch away from EtG even if it’s the right thing to do.
Edit again, 2015-09-05: This criticism is much more applicable to software engineers than other EtG careers.
Early on at 80k, when promoting earning to give, we regularly got the opposite argument: that what we were promoting was too much of a sacrifice! I just about agree with you, but I think it’s unclear—there are a lot of people who want to do meaningful work, and don’t care much about giving.
Maybe loss aversion/endowment effect is a bias that’s operating here? Donating hard-earned money is more psychologically painful than forgoing additional income?
I for one would find it much psychologically easier to live on 25% of my current comp than to donate 50% of it.
I think you and I are psychologically different in that way. So maybe this gives you a comparative advantage in direct work, and me a comparative advantage in EtG?
Let’s see if the Less Wrong poll code works on the EAF… Which option would you find easier psychologically?
[pollid:6]
Are you considering the greatly reduced financial security? With etg, if you run into a problem you can just cut back your donations. That’s not true if your income is a quarter of what it was.
It’s true that donations provide a cushion which should reduce insecurity, but I think the insecurity would be low anyway so reducing it has little value.
I think you’re reflexively looking for a heuristic explanation for something which is in fact fairly obvious. Most people consider stereotypical earning-to-give careers—management consultancy, IB and so on—as both stultifyingly dull and ethically nebulous on their own terms. The one redeeming fact of the situation is supposed to be that you are giving away an appreciable portion of your earnings. A life of this order requires you to meet a fairly high threshold of asceticism.
The idea that people might avoid earning-to-give because of the psychological toll of loss aversion fails to take into account that a lot of the people who are attracted to EA rate personal income as a low priority (or even something to be avoided).
Your statement sounds correct as far as it goes. I was picturing a person who already had a high-earning career being told that they were expected to give up income which had been going to savings or luxuries. Not sure which scenario Will’s experience was closer to.
I don’t think it’s fair to say that EtG is less of a sacrifice than direct work. It depends on a number of factors. If someone earns to give by staying in the same job, working the same number of hours they otherwise would, and still living on a substantial proportion of their salary, it may not be that much of a sacrifice.
However, EtG could also mean working at a job that may not have been one’s first choice otherwise (e.g. finance), working many more hours than one would otherwise, and/or living on just as much as or less than one would if doing direct work. The EtG work MacAskill suggests involves taking high-paying jobs like finance rather than staying in whatever job one happens to be doing, so I don’t think your criticism stands in that case.
Many more EtGers are in the first situation rather than the second, I think.
Most people who do direct work are probably also in suboptimal jobs that require less sacrifice. But whether the average EtG EA or the average direct-work EA is making a greater sacrifice is irrelevant to deciding whether you pursue MacAskill’s suggested EtG path. There’s no reason why you would want to overcorrect from that. If your EtG plan itself involves minimal sacrifice, then you might want to correct for that. Same with direct work that requires minimal sacrifice.
I’m not sure framing it as a “sacrifice” is the best phrasing here. Though it may be descriptively accurate that most people who give mentally account for it as sacrificial, we should try—where possible—to frame it as something positive and willingly done. This would probably make it more motivational.
http://effective-altruism.com/ea/4r/cheerfully/
I agree with all of this, Will. I’ll just add that individuals who earn to give can also fund some projects that large philanthropic groups like OpenPhil can’t, because those projects are: i) very new or small; ii) very risky; or iii) too strange-looking to the public. This is another way in which we can’t put all our eggs in one basket: we need a diversity of funders and funding strategies to ensure good things aren’t missed. Individuals can play a role filling gaps left by others, though we may not need that many of them.
EAF is sufficiently funding-constrained again that we’ve decided that I’ll mostly ETG for the foreseeable future, even though I feel a stronger personal fit for direct work, have strong value alignment with EAF, and “only” earn roughly the average German software engineer’s salary. So I share the impression that, at least in my EA circles, ETG is still not overdone.
It seems like there’s a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other. I know Open Phil has struggled with the issue of whether to fill up the room for more funding of the groups it does choose to fund. I wonder what would happen if they just said screw it and fully funded all the groups they had confidence in. This would push smaller donors towards evaluating and funding niche opportunities like EAF. Let tigers hunt buffalo and bobcats hunt rodents.
Some disadvantages: “find and fund a niche effective giving opportunity” is a weaker call to action than “donate to AMF and save kids in Africa”. I also suspect people who evaluate charities professionally for e.g. Open Phil are better at it than random members of the EA community working in their spare time. But I’m not very confident in this… check out this excerpt from Thinking Fast and Slow on the kind of things it’s possible for a person to develop expertise in. There’s a pretty interesting case for radical skepticism to be made here. (Also, since we’re talking about smaller amounts of money, it’s less important for the donations to be thoroughly considered?)
Related to the expertise point: I’ve been told that there’s a decent size literature on how to make accurate forecasts that the EA community is mostly ignoring. (Tetlock being the most visible forecasting researcher, but definitely not the only one.)
This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture—say, one of nearly adequate funding, and a severe lack of talented people.
This is ok, and should be expected to happen if we’re all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who are more liberal than 50% of the population, so too can one end up knowing many talented people who could be much more effective with funding, since people’s social circles are often surprisingly homogeneous.
Hi John! :‑)
I agree with Michael’s most recent post that it would be hard to overinvest in areas such as WAS given sufficient funding gaps, and at least in the case of WAS I also see a bunch of these funding gaps. But I also think that the strategy Open Phil has been following in funding individual charities is sound. The charities themselves would be ill-advised to rely on Open Phil as their only funder. GiveWell itself once turned down a donation that was too big for its funding gap at the time because it would’ve made it dependent on the funder.
If Open Phil were to fill funding gaps completely, the charity would have to have absolute trust that Open Phil will support it indefinitely or with a very long period of exit grants. In the interest of cooperativeness, Open Phil would be obliged to do so. But even if Open Phil’s research were mature enough for it to make such a commitment to some charities, it could not make it to charities working on problems they can plausibly solve for good. Whether Open Phil changes its priorities or the charity switches to a different program, in either case the years-long work of building up a diversified donor base would have gone down the drain and would have to be repeated.
GiveWell may be in a special position in that it has donors who are loyal to beneficiaries rather than to charities and who will switch their donation targets when GiveWell announces that it has received a huge grant. But even in the case of charities that are not predominantly funded by EAs, it would be uncooperative of the charity to retain most of its donors when their marginal donations could do much more good with another organization, and then it will be very hard to win them back if Open Phil wants to phase out its grants. So I think its current strategy is sound.
Then again, it’s a bit of a moot point in this context, since their grants to EA charities, WAS research, etc. are still forthcoming afaik.
About the expertise: On the one hand, charity-picking is a lot like stock-picking, so that would be a point in favor of radical skepticism. And the outcomes are harder to observe and be sure of than on the for-profit market. But since there is hardly anything resembling an efficient market for impact, the current situation for experts is probably closer to that of the first savvy traders to join the stock market some 100+ years ago. They probably had a much easier time finding great investment opportunities than we have today. Even if someone as young as Warren Buffett were born today, he’d probably be much less successful. Still, 2:1 for radical skepticism. Not sure how to weigh the arguments.
Friends of mine are talking a lot about Tetlock, so, fwiw, it’s not as neglected in Berlin. ^^
I don’t think this makes sense—you will always do better with more money than with less money. If you’re concerned about becoming dependent, you can simply use a small portion of the money you’re given and hold the rest in reserve. That way you don’t grow to the point of relying on this new funder, and you’re strictly less dependent on funding because you have bigger reserves.
If you believe the funder could better use the money elsewhere, maybe it would make sense to tell them to donate it somewhere else, but I don’t think this makes sense either. If you have more money than you need, you can give the money to whomever you think needs it most, and this can’t possibly be worse than letting the original funder give the money wherever they want.
Iirc, the funder was Good Ventures in this case, so a smart, value-aligned funder. Often charities may also not have the option to legally forward even unrestricted funding, though that’s probably not an issue in the case of GiveWell.
But even when faced with a dumb funder who wants to fill your funding gap because they like the colors of your logo (and assuming you’re not free to forward the funds), the signaling risk of breaking with cooperativeness should weigh heavily. Trying to walk some middle ground here seems socially risky; it is probably better to strongly signal cooperativeness and follow it blindly than to compromise on a case-by-case basis hoping no one will notice – especially in tight-knit spaces like animal rights activism, WAS, and the like. My impression of the North Korean human rights space is that a lot of charities don’t trust each other, and it’s holding the space back a lot. (See also, once more, The Attribution Moloch.)
How is it uncooperative to accept people’s money when they want to give it to you?
It’s a defection against other charities that would be more effective at the margin than your charity. It’s a bit unclear what a funding gap is supposed to be, but let’s use the definition GiveWell uses for itself: “We seek to be in a financial position such that our cash flow projections show us having 12 months’ worth of unrestricted assets in each of the next 12 months.”
Highly effective charities probably have huge ROI, and even the last bits of the above assets, even when they are kept on a bank account, may still reduce risks to the point of being better investments than a stock portfolio. But at least when a donor swamps a charity with so much money that it exceeds even a funding gap as defined above, the charity will need to put the excess money into a portfolio and will earn just the usual ~ 5% p.a.
At this point the marginal utility has dropped so far that it’s likely that another charity would’ve been able to use the money better.
If the funder is known to be capricious and might buy a yacht from the remainder if you don’t lie about your funding gap (a.k.a. use some suitably wide definition), then other charities will understand your predicament. But with a smart, value-aligned funder, such a move will signal defection, and defection is best countered by defection (Robert Axelrod’s tit for tat algorithm) in repeated prisoner’s dilemmata, so that we’ll end up with a space where all the charities are just busy keeping each other down.
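As an aside, the GiveWell funding-gap definition quoted above is mechanical enough to compute. Here is a minimal sketch with entirely hypothetical numbers (the burn rate, income stream, and starting assets are all made up for illustration, and a flat monthly burn is assumed):

```python
# A minimal sketch of the funding-gap definition quoted above.
# All numbers are hypothetical, for illustration only.

monthly_expenses = 50_000          # assumed flat monthly burn
expected_income = [40_000] * 12    # assumed unrestricted revenue per month
assets = 550_000                   # assumed unrestricted assets today

target = 12 * monthly_expenses     # "12 months' worth of unrestricted assets"
gap = 0
for income in expected_income:
    assets += income - monthly_expenses   # project assets one month forward
    gap = max(gap, target - assets)       # worst projected shortfall so far

print(f"funding gap over the next year: ${gap:,}")   # -> $170,000 here
```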
In the case of Open Phil, this is mitigated by the abundance of funding that Open Phil has available. The prisoner’s dilemma is a product of relative scarcity, so greater abundance ameliorates it.
If a charity thinks that its excess cash would be better off with another charity than sitting in its bank account, wouldn’t it just give the cash to another charity? And this is at least as good from the charity’s perspective as letting the original donor decide what to do with the excess funds.
Yes, I premised my argument above (in the comment from August 27) on that not being possible. In Germany I think foundations can forward donations like that, but I think it’s more complicated or only possible in some cases for other types of nonprofits. I just saw online that US 501(c)(3) charities can forward donations under certain conditions that didn’t seem too hard to meet. So if the donor is fine with it, that should be a perfectly cooperative option.
Actually, in the case of Open Phil, such things could be discussed beforehand, and if the charity is up for it and can legally do it, then Open Phil–recommended grants could snowball through the space, saving time on the Open Phil side and possibly even reaching a number of highly effective niche charities that would’ve been too small to warrant Open Phil’s attention.
“(Tetlock being the most visible forecasting researcher, but definitely not the only one.)”
Is there anybody in particular other than Tetlock that you think EAs are neglecting?
J. Scott Armstrong’s papers have been useful.
Thanks!
Oh, if you’re actively looking for material in this area I’d also recommend the work of Randolph Pherson:
https://smile.amazon.com/Structured-Analytic-Techniques-Intelligence-Analysis/dp/1452241511/ref=la_B005KRE6B2_1_1?s=books&ie=UTF8&qid=1472336906&sr=1-1
https://smile.amazon.com/Cases-Intelligence-Analysis-Structured-Techniques/dp/1608716813
(referral links go to CFAR) (also check libgen)
What are you interested in funding?
Mostly ACE and EAF. At the moment I tend toward EAF; when it’s less funding-constrained again, ACE will move up again.
I don’t see a practical way to answer this question. You may have a sense of your absolute advantage at a career relative to other people, but to know your comparative advantage you have to know what you’re good at and which skills other people are relatively good or bad at.
...The top 10% of EAs in the 2015 EA survey gave $8K/year each or more, which isn’t a high bar to clear.
I think you’re talking about absolute advantage here, not comparative advantage. Even if I can’t donate $8K, earning to give could still be my comparative advantage.
Suppose the best job I can get pays $40K and I could donate $8K of that, or I could do direct work. Someone else might be able to donate $40K or they could do direct work, and they’d be better at both than I am. If they’re more than 5x better at direct work than I am then I should earn to give, but it’s really hard for me to tell how much better they are at each thing. This gets even more complicated when you’re comparing lots of people instead of just two people.
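For what it’s worth, the two-person version of that comparison is easy to spell out. Here is a sketch using the donation figures from the comment above; the 5x threshold just falls out of the donation ratio, while the ratio of direct-work abilities remains the hard-to-estimate unknown:

```python
# A sketch of the two-person comparison above. Donation figures are the
# comment's; how much better the other person is at direct work is the
# unknown the comment says is hard to estimate.

my_donation = 8_000        # $/yr I could donate if I earn to give
their_donation = 40_000    # $/yr they could donate if they earn to give

# I have the comparative advantage at earning to give exactly when my
# direct-work output per dollar forgone is lower than theirs:
#   my_direct / my_donation < their_direct / their_donation
# which rearranges to: their_direct / my_direct > their_donation / my_donation.
threshold = their_donation / my_donation
print(f"I should earn to give if they are more than {threshold:.0f}x "
      "better at direct work than I am")
```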
Sure, but your absolute advantage may provide some evidence of a comparative advantage. If you can give, say, ~10x what the top 90th percentile of self-identified EAs give, you might also find some direct work that allows you to contribute much more effectively than most EAs do directly, but it means there’s a higher bar to clear.
But many of those people aren’t earning to give. If they were, they would probably give more. So the survey doesn’t indicate you are in the top 15% in comparative advantage just because you could clear $8k.
If many of those people aren’t earning to give, then either fewer EAs are earning to give than is generally assumed, or the EA survey is not a representative sample of the EA population.
Alternatively, we may question the antecedent of that conditional, and either downgrade our confidence in our ability to infer whether someone is earning to give from information about how much they give, or lower the threshold for inferring that a person who fails to give at least that much is likely not earning to give.
I think that e.g. talking to someone at 80k can help give you a sense of this—certainly better than nothing. If you’re thinking of leaving earning to give, but people at 80k can think of several examples of people who are currently earning to give and have greater comparative advantage at direct work, then we can at least say that someone’s making a mistake.
I agree with you that as phrased, this question isn’t very useful.
The only way I can think that the EA community would be able to solve this kind of problem would be if an EA organization had detailed notes on everyone earning to give, and used that information to recommend actions to individuals. (This idea is kind of half baked and crazy but I’ve been thinking recently that it might be overall worthwhile.)
heuristic: people who seem like they could be doing valuable direct work are wasting their time learning about and executing on fundraising.
I have encountered several such people in the last year.
heuristic: is EA funding ‘crazy’ projects at the margin? If not, the risk tolerance bar is being set too low.
I have encountered zero projects that seem too crazy in the last year.
heuristic: all projects run by people past a certain competence threshold should be funded.
This one is trickier, I think EA has a lot of room to improve on this metric, and it may be that money should mostly be saved until more people are past some threshold here. OTOH, the easiest way to build this up might be to have more attempts and failures of small scale experimental projects.
These are really interesting heuristics that I think are non-obvious (I’ve never heard anything like them before) but clearly useful.
I’m curious what your definition of “crazy” is. Does “crazy” mean low probability of success and high expected value? By extension, does “too crazy” mean it has a higher payoff than almost anything else, but too low a probability of success to be worthwhile? I can certainly think of things EAs are funding that I don’t believe they should be funding, but I don’t know if that’s the same thing as “crazy.”
WRT “crazy”, I mean things that might not pass initial sniff tests (absurdity heuristic), things that are outside or far away in reference class space and thus are hard to reason about via analogy, things that make taboo tradeoffs and are thus bad to talk about publicly for brand reasons, or just plain audaciousness. Maybe there are more cues for thinking about these, haven’t tried to apply tools to it yet.
Crazy to EAs or crazy to general population? If it’s the latter, AI-safety research qualifies. If it’s the former, EAF’s wild animal suffering research might still qualify. If you disagree, tell an example of a crazy idea.
Paying researchers to investigate AI safety and WAS doesn’t seem crazy at all to me given the low cost of exploration. Pilot interventions might qualify as crazy, once identified.
Actually crazy would be funding the person who thinks solar updraft towers (https://en.wikipedia.org/wiki/Solar_updraft_tower) can be built an order of magnitude cheaper than current projections and wants to create some prototypes. (I can’t find a link to his page; I have it somewhere in my notes.) Other moonshots in the same reference class (area has not been explored, has potential for large gains once upfront costs have been paid): energy storage, novel biological models underlying disease (remember when everyone laughed at bacteria?), starting additional research-focused group houses with different parameters, radical communication research focused on eliminating the hurdles to remote work.
These are off the top of my head, but the object level examples are less the point than simply that we aren’t putting effort into coming up with stuff like this vs further elaborating on the stuff we’ve already found.
Was this the guy you were thinking of? :D
http://www.superchimney.org/ (video)
To make it even crazier, buy all the land around the superchimney, build a charter city around the chimney once it starts working, and make a fortune in real estate.
Will: Has 80k or someone else considered writing up a profile of the typical EA in the scenario you note (early career, willing to choose just about any career option if it maximizes good) to give people a better understanding of what standard we should be comparing ourselves to when assessing our comparative advantage? I can see this being particularly useful for people with many good options who don’t know where to go. From what I see, most people I’ve talked to seem to be relying on informal conversations and intuitions about their peers that might easily be wrong. Something like: Early career EAs as a group are very skilled at x but seem to lack y skills as compared to the demands of the ‘EA job market’. Their median SAT/GRE scores are xyz (if that data is available), so to be considered particularly quant-y as compared to the group you should be in the x-y range, etc. Something like this but updated & tailored to help people coordinate amongst themselves would be great: https://80000hours.org/2014/03/coaching-applications-analysis/. If such a resource exists already, it’d be marvelous if someone could point me to it.
Come to think of it, this sounds less feasible but some sort of comparative advantage calculator (intending to do exactly what you describe in your edit but compared against the average EA) sounds like it could be useful, if difficult to achieve.
I realize belatedly that my original post sounds like it’s talking in terms of absolute advantages still :) But having a general sense of the “ratios” between the different skillsets the median young EA possesses would be useful for comparative purposes, I think. Maybe presenting that information in terms of ratios rather than absolute figures can also help ward against the anxieties of being part of (at least what I perceive to be) such a highly talented community. This might be easiest to do with things like SAT scores, where you have actual numbers to work with. But if this is a bad/incorrect way to think about comparative advantage I’d appreciate the correction.
To be honest, I don’t think the 15% number is useful; it’s the average of four employees’ guesses, and no supporting evidence for these guesses is presented.
If each employee actually conducted a thoughtful analysis before arriving at an estimate, then maybe the 15% number would be useful, and I think it would be helpful to share those details.
Some napkin math:
EA survey shows $6.75M total donations by EAs. 2352 self-identifying EAs were surveyed. Let’s say it costs $50K/yr per EA employed full time at a nonprofit. $6.75M divided by $50K is 135. 135 likely overestimates the number of people you’d employ on that budget, given overhead costs and the fact that many cause areas are going to have big expenses unrelated to employment (e.g. AMF has to pay for nets).
Given 2352 self-identifying EAs, if 135 are working for nonprofits full-time and 353 (15%) are optimizing their careers for earning to give, that would leave us with 1864 EAs doing self-supported impact-focused work (e.g. working as a researcher in a lab, working as a journalist, etc.)
Open Phil is a big wildcard. Dustin Moskovitz is worth on the order of ~$10 billion. He says he wants to give away his entire fortune in his lifetime. (Inside Philanthropy describes this as a “supertanker” of money… it’ll be interesting to see if/how the nonprofit world responds.) If he’s got 60 years left, that averages out to around $160M/yr. In reality it will likely be above $10B, if Facebook does well or he diversifies his portfolio and invests wisely. That’d be enough money to employ ~3000 EAs given the $50k/yr spending assumptions above. (But the EA movement is growing.)
I added up the grants described on Open Phil’s website. They’re on the order of $40M. Over the past 3 years, Open Phil has been giving away money at a rate of around $13M/yr. I suppose that rate of giving will gradually increase until they’re giving away money 10x as fast? If so, marginal earning to give could be more valuable in the near term than the long term? This could be an argument against e.g. going to grad school to build certain sorts of career capital. I’m also curious what sort of giving opportunities Open Phil is not willing to fund. It does seem like they’ve demonstrated past reluctance to fund weird causes that might hurt their brand, but that might be changing? And, how reluctant are they to be a charity’s primary or sole funder? (Alluding to Telofy’s comments elsewhere in this thread.)
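For anyone who wants to check or tweak the napkin math above, here it is as a few lines of Python. Every figure is the comment’s own rough estimate, not verified data:

```python
# The napkin math above, spelled out. Every figure is the comment's own
# rough estimate (not verified data), and $50k/yr per hire is assumed.

total_donations = 6_750_000        # EA survey: total donations by EAs
cost_per_hire = 50_000             # assumed cost per full-time nonprofit EA
n_eas = 2352                       # self-identifying EAs surveyed

funded_roles = total_donations // cost_per_hire    # 135
etg = round(0.15 * n_eas)                          # 353 at the 15% figure
self_supported = n_eas - funded_roles - etg        # 1864
print(funded_roles, etg, self_supported)

fortune = 10_000_000_000           # order of Dustin Moskovitz's net worth
annual = fortune / 60              # ~$167M/yr if spread over 60 years
print(round(annual / cost_per_hire))   # ~3,300 employable; comment says ~3000

open_phil_grants = 40_000_000      # grants listed on Open Phil's website
print(open_phil_grants / 3)        # ~$13M/yr over the past 3 years
```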
Funding weird stuff should just be a branding/logistics exercise. Highly exploratory stuff gets put out of sight in an R&D lab like Google X and only the successes are shown off. This is valuable to the degree that there might be valuable interventions cloaked by What You Can’t Say.
Giving away only small amounts for now is consistent with the VoI being much higher in the initial exploratory phase than any actual object level outcome. The outside view says: most charitable efforts in the past have NOT consistently ratcheted towards effectiveness, but have, if anything, ratcheted towards uselessness. Understanding why is potentially worth billions given the existence of the giving pledge and the idea that EA type memes might heavily influence a substantial chunk of that money in the coming decades.
Relevant research questions might include: How do we form excellent research teams? How do we divvy up the search space among teams? What sorts of search and synthesis heuristics should be considered best practice?
This direction or frame sort of hints at a furthering of the frame of EA as leaking into the charity world the lessons and practices of the for-profit world. Can we do lean/agile charity? If so, can we find/develop excellent teams for executing on some part of the search space of charity interventions? Can we give them seed funding and check results? etc.
The accuracy of collective forecasting depends more on the number of contributors than on the intelligence of each individual one.
I agree. I was originally under the impression that 80K had surveyed a lot more than four people to come up with the 15% number.
It’s pretty explicit in the original blogpost:
The purpose of the number was to show the view of 80k (which we perceived most people not to be aware of). I guess the usefulness of it depends on how reliable you think the gestalt judgment of the employees at 80k is.
Some of these career paths either allow you to earn to give along the way, or, I would have thought, fall straightforwardly into the earning-to-give category (for-profit entrepreneurship). A person hearing the 15% number without context might not realize this.
That’s fair, if I use it again I’ll try to make that explicit. The 15% also doesn’t include skill-building in well-paid jobs as a stepping stone to direct work.
Oh yeah that is pretty explicit, I guess I forgot that part and just remembered the 15% part, and then assumed you had surveyed like a dozen people.
I’d be interested to learn more about this number. I assume some attendees were uncertain about their long-term plan? If so, some of these people may end up earning to give?
Also what do 80K’s target percentages vs actual percentages look like for other EA career options?
In the EA survey (https://eahub.org/sites/effectivealtruismhub.com/files/SurveyReport2015.pdf, page 18) there was a question: “What broad career path are you planning to follow?” Results: Direct charity / nonprofit work: 190; Earning to give: 512; Research: 362; None of these: 375; Didn’t answer: 913.
Percentages:
Direct charity / nonprofit work: 190 / 2352 = 8%
Earning to give: 512 / 2352 = 22%
Research: 362 / 2352 = 15%
None of these: 375 / 2352 = 16%
Didn’t answer: 913 / 2352 = 39%
I imagine there’s also some selection bias—those doing ETG often have jobs that are harder to leave to go to a conference.
Yeah, just as an illustrative anecdote: three people currently working in Jane Street’s London office attended EAG Oxford last year, myself included. I’m fairly sure none of us attended EAG this year.
I was concerned enough about this before the event to check in with some other people and make sure that at least some EtG people would turn up, which they did, but I’d still expect that percentage to be on the low side.
On the other hand, they more often have enough spare money to fly halfway around the world to a conference.
Also more hardcore EAs are more likely to come to the conference. But these percentages don’t mean much to me because people can’t be easily categorized into EAs and non-EAs. There are varying degrees of EAness.
I liked the idea!
I would like to see clearer disclosure of your institutional ties, insofar as knowledge of these ties might affect people’s assessment of the direction in which your advice might be biased. Proactive disclosure would also help you preempt criticism or dismissal of your advice due to your institutional affiliations.
I’m also curious for others’ thoughts on whether such disclosure would be helpful.
Here’s a suggested disclosure.
“I am affiliated with the Centre for Effective Altruism (CEA) [description] and 80000 Hours [description]. I have written a book on the effective altruism movement and have been described as a “co-founder” of effective altruism. I also gave the closing keynotes at Effective Altruism Global in 2015 and 2016. Views expressed here are solely my own but are informed by my experience working at CEA and 80000 Hours and interacting with people in the context of the effective altruism movement. While I have a vested personal interest (in financial and prestige terms) in increased funding flowing to the effective altruism movement and to the two institutions I am affiliated with, I believe that my advice is not compromised by this vested interest.”
This disclosure seems too long to me (e.g. talking about EAG speeches?), perhaps because it is doubling up as both a barb and a suggestion. Here’s my suggestion:
“Disclosure: I am CEO of the Center of Effective Altruism, which is funded by the donations of effective altruists, including donors who earn to give.”
Disclosure: I have done paid consulting for CEA, and have interacted with it from its founding (and with Will from before that).
[ETA: as you say, one could also link to a longer conflicts of interest page, but one needs something in the post since most won’t follow the link, and that can’t be too long.]
I see your point about long disclosures being cumbersome. I believe a better solution is to have a canonical long disclosure and simply to link to it with a brief description. I just added a canonical long disclosure about my relationship with GiveWell at http://vipulnaik.com/givewell/ and I have edited my two recent EAF posts about them to include a disclosure link to that. I will try to do the same for any further posts or comments I write about GiveWell, and write similar canonical long disclosures to link to for topics that I frequently write about and have had long, complicated associations with.
Did you mean “blurb” instead of “barb”?
No.
As in you’re criticizing the lack of a disclosure at the same time as providing information.
My assumption would be that basically everyone who reads this post knows who I am, and from the upvote/downvote ratio, it seems that others probably agree. But I don’t think there’s much harm in regularly using Carl’s disclosure (except for his abominable American spelling of ‘Centre’ ;) ), as it’s a reasonable general norm to have.