How important is marginal earning to give?
Some observations:
Most of GiveWell’s senior staff are moving over to the Open Philanthropy Project.
This year, GiveWell had to set explicit funding targets for all of their charities and update their recommendations in April to make sure no charity ran out of room for more funding.
My understanding is that Good Ventures (a) probably has more money than the current discounted cash flows from the rest of the EA movement combined and (b) still isn’t deploying nearly as much money as they eventually will be able to.
Open Phil has recently posted about an org they wish existed but doesn’t and funder-initiated startups.
I can’t remember any EA orgs failing to reach a fundraising target.
Effective altruism is growing quickly; many EAers plan to earn to give but are currently students and will increase their giving substantially in the next few years.
These observations make me feel generally weird about earning to give: Good Ventures and other large foundations can fund a ton of stuff, and there are many individual EA donors (many, at least, relative to the available opportunities) who can fund the good ideas that aren’t worth large funders engaging with for whatever reason. So it might be important to have more people trying to spot opportunities and start effective charities with support from large funders or current EtGers. For instance, the Gates Foundation has 1200 employees trying to help them deploy their money (and that’s presumably not counting the people who help them start new organizations); applying a similar ratio to Good Ventures would suggest they should have on the order of 100 people helping them, whereas today they have ~10.
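For concreteness, here’s that ratio as a back-of-the-envelope sketch; the staff counts come from the paragraph above, but the dollar figures are purely illustrative assumptions, not reported numbers:

```python
# Back-of-the-envelope version of the staffing-ratio argument above.
# Only the staff counts (1200 at Gates, ~10 at Good Ventures) come
# from the post; the grantmaking figures are illustrative assumptions.
gates_staff = 1200           # employees helping deploy Gates money
gates_grants = 4.0e9         # assumed annual grantmaking, dollars/year
staff_per_dollar = gates_staff / gates_grants

gv_grants = 0.3e9            # assumed eventual annual deployment, dollars/year
implied_gv_staff = staff_per_dollar * gv_grants

print(f"Implied Good Ventures staff: {implied_gv_staff:.0f}")  # ~90
# On the order of 100, versus the ~10 they have today.
```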
Given that doing a normal job and making large donations is psychologically more attractive than trying to start nonprofits for a lot of people (including myself), this suggests that marginal EtGers (also potentially including myself?) might want to give more weight to trying to find opportunities to start new effective organizations, and leave the funding to people like Dustin Moskovitz.
One counterpoint might be that “large funders” are not actually that large; for instance, 72% of total giving is from individuals, but I don’t know if that ratio holds for global poverty or other causes EAs are interested in. And even if it does, it seems like you have to be a certain size of organization to raise grassroots funds effectively, and right now we don’t have enough orgs of that size.
I’d love to get other people’s thoughts on this.
To play devil’s advocate (these don’t actually represent my beliefs):
This doesn’t necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.
It’s pretty hard to get funding for a new organization, e.g. Spencer and I put a lot of effort into it without much success. The general problem I see is a lack of “angel investing” or its equivalent–the idea of putting money into small, experimental organizations and funding them further as they grow. (As a counter-counterpoint, EA Ventures seems well poised to function as an angel investor in the nonprofit world.)
Also, to address the general point that EA is talent-constrained, the problem might be that there are very few people with the skills needed, and more funding can be used to train people, like MIRI is doing with the summer fellows program. In that case earning to give is still a good solution to the talent constraint.
I agree with this. Moreover, I think there’s a serious lack of funding in the ‘fringe’ areas of EA like biosecurity, systemic change in global poverty, rationality training, animal rights, or personal development. These areas arguably have the greatest impact, but it’s difficult to attract the major funders.
For example, I think the Swiss EA groups are quite funding-constrained, but they aren’t well-known to the major funders and movement-building lacks robust evidence.
Have the Swiss EA groups tried to raise funding from the broader community? I had no idea they were funding-constrained until you mentioned it.
It’s correct that the Swiss EA organizations are currently funding-constrained. We haven’t pitched any projects to the international community yet, but we’re considering it if an opportunity arises where this makes sense.
I also think that funding is going to be less of an issue once more people in the movement transition from being students to EtG.
Also, do you have other evidence on this than Satvik’s? Have you also tried to get angel funding or something?
I agree that this could confound the result, but it’s still some evidence!
It’s hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
I get the impression that these are going mostly to programs that already have a lot of evidence and aren’t really exploring the space of possible interventions. I tend to believe that the effectiveness of projects probably follows a power law, and that therefore the most effective interventions are probably ones people haven’t tried yet, so funding variants on existing programs doesn’t help us find those interventions.
GiveWell style research seems very trainable, and it is plausible that GiveWell could hire less experienced people & provide more training if they had significantly more money (I have no information on this though.)
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
I wouldn’t say that New Incentives has “a lot of evidence and aren’t really exploring the space of possible interventions.” But again, this is just dueling anecdata for now.
GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).
Ah, good point. This seems like a pretty plausible mechanism.
Oh, cool! I definitely didn’t realize this.
So if starting new projects and enterprises is the constraint, then surely EtG is still less marginally effective than doing these endeavours, or facilitating support for them, where they have high expected value?
Note that GiveWell / Good Ventures (unsurprisingly) like to research a charity or cause area themselves before they direct funding to it, and this is tightly constrained by the pace of GiveWell research staff growth, so in practice many high-leverage opportunities are still (in my opinion) available to marginal EtGers — at least, if those EtGers are willing to be at least 1/5th as proactive about finding good opportunities as, say, Matt Wage is. Maybe that won’t be true after 10 years of additional research conducted by GiveWell (incl. OpenPhil), but I think it’ll be true for the foreseeable future.
There are probably additional reasons GiveWell / Good Ventures won’t fund particular things, besides the fact that they haven’t been researched in sufficient depth by GiveWell. E.g. GiveWell might think it’s a good thing for there to be multiple meta-charities in the EA space that maintain independence, and so even if funding CEA projects is a clear win, they still might think it’s a bad idea for GW/GV to direct any support to CEA projects.
And finally, it’s also possible that individual EtGers might have different values or world-models than the public faces of GW/GV have, and for that reason those marginal EtGers could have good opportunities available to them that are not likely to be met by GW-directed funding anytime soon, if ever.
(I say all this as a random EA who thinks about these things, not as a soon-to-be GW staffer.)
That said, I also think people with the right collection of talents should seriously consider applying to do cause prioritization research at GW or elsewhere, and people with a different right collection of talents should consider starting new projects/organizations, especially when doing so in coordination with an already-interested funder like GV.
Yes, I think it’s right that people can find opportunities beyond those that are researched by GW if they have different values or epistemology, proactively investigate opportunities to fund, or outsource this evaluation to Wage, EA Ventures, Beckstead, or elsewhere.
I love the idea of outsourcing my donation decisions to someone who is much more knowledgeable than I am about how to be most effective. An individual might be preferable to an organization for reasons of flexibility. Is anyone actually doing this—e.g., accepting others’ EtG money?
In fact, I’d outsource all kinds of decisions to the smartest, most well-informed, most value-aligned person I could find. Why on earth would I trust myself to make major life decisions if I’m primarily motivated by altruistic considerations?
Well, even if you’re primarily motivated by altruistic considerations, there are likely to be some significant personal factors that you can introspect on more easily. But what’s related, and clearly beneficial, is getting advice from mentors whom you can talk to when you have a bigger-than-usual decision.
My other thought is: what kinds of decisions do you want to outsource? Clever altruistic people have occasionally described why they made various kinds of decisions in their personal lives, and these can be copied e.g.:
http://www.gwern.net/DNB%20FAQ
https://meteuphoric.wordpress.com/2014/11/21/when-should-an-effective-altruist-be-vegetarian/
http://robertwiblin.com/2012/04/19/should-you-floss-a-cost-benefit-analysis/
Absolutely re personal factors. “Outsource” is an overstatement.
And no, I don’t mean decisions like whether to be a vegetarian (which, as I’ve noted elsewhere, presents a false dichotomy) or whether to floss, which can be generically answered.
I mean a personalized version of what 80,000 Hours does for people mid-career. Imagine several people in their mid-30s to mid-40s—a USAID political appointee; a law firm partner; a data scientist working in the healthcare field—who have decided they are willing to make significant lifestyle changes to better the world. What should they do? This seems to be a very different inquiry than it is for an undergrad. And for some people, a lot turns on it—millions of dollars. Given the amount at stake, it seems like a decision that should be taken just as seriously by the EA community as how an EA organization should spend millions of dollars.
Ah, mid-career work-related decisions. Yes, it seems important. As mid-career decisions are more tailored, they’re harder for 80,000 Hours, who are nonetheless better equipped than most for this task.
Although career direction is important, you can see why it might be done less than directing donations—everyone’s money works the same, and so one set of charity-evaluations generalises reasonably well to everyone, assuming they have fairly similar values. Career decisions are harder.
Mentors who sympathise with the idea of effective altruism are helpful here, because they know you. Also special interest groups could be useful. So for people in policy, it makes sense for them to be acquainted with other effective altruists in a similar space, even if they’re living in a different country. If someone who had an unusually high-stakes career (say Jaan Tallinn, a cofounder of Skype) wanted to make an altruistic decision about his career, I’m sure he could pull together some of 80,000 Hours and others to do some relevant research for him.
Beyond that, how we can get these questions better answered is an open question :)
I’m thinking more along the lines of mentors for the mentors, and I think one solution would be a platform on which to crowdsource ideas for individuals’ ten-year strategic plans. In a perfect world, one would be able to donate one’s talents (in addition to one’s money) to the EA cause, which could then be strategically deployed by an all-seeing EA director. Maybe MIRI could work on that.
MIRI is focussing on mathematical AI safety research at the moment, so they wouldn’t currently want to act as a director of EA resources in general!
I think for people who really have substantial personal non-monetary resources to give away, there are people who are prepared to step into a temporary advice-giving role, which might not even be so materially different from what you’re describing. Even with my limited non-monetary resources, I’ve got quite helpful advice from people like Carl Shulman, Nick Beckstead and Paul Christiano, who I think are somewhat of a collective miscellaneous-problem-EA-question-answerer!
Mentoring the mentors: the problem with giving advice to senior people is that if you know less about their domain than they do, then your advice might well make them worse off. So in such cases, it’s often preferable to bring you together with similar people, so that you can bounce ideas off one another. Or maybe I’m still missing some considerations, but these reservations seem worth taking into account.
Thanks, Ryan. That’s all very helpful.
(And the MIRI reference was a superintelligent AI joke.)
Haha ohhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh!
This is now a thing: http://effective-altruism.com/ea/174/introducing_the_ea_funds/
Interesting! Are you able to be more concrete about those opportunities? (Or how proactive Matt is?)
Yeah, definitely agree that this is the case—on the other hand, it seems like there are a lot of EtGers with a fairly diverse set of values/world-models in place already. I’m worried specifically about marginal EtGers; I think the average EtGer is doing super useful stuff.
From talking to Matt Wage a few times I got the impression that he spends the equivalent of a few full time work weeks per year figuring out where to donate. Requiring potential donors to spend that much time seems like a flaw in the system, and EA ventures seems to be addressing it.
I don’t know the whole story, but Matt Wage kept close tabs on FLI, and gave a substantial amount of money at a well-chosen time, which helped make the AI conference planning go more smoothly.
This is what I was trying to get at with http://acesounderglass.com/2015/05/11/map-of-open-spaces-in-effective-altruism/ . I don’t think the number of unsolved problems is at all well publicized.
I liked this article! There may be enough forum-goers that haven’t seen it (as I hadn’t) that it would be worth cross-posting.
Thanks :) I plan to do so as soon as I have the karma
A note on what we mean when we talk about “marginal EtGers”.
In some sense all EtGers look marginal, in that they could shift the margins by moving into direct work. But there’s a coordination issue. Really, the people who have the highest comparative advantage at EtG should be pursuing it, and whether we have the right balance determines where the cut-off should be between people choosing EtG or direct work. “Marginal EtGers” are people who only just decided on EtG. They could be people already in EtG careers, but they will more often be people who haven’t started, because experience and specialisation shift your comparative advantage.
I expect this is all exactly as you were thinking, but I’ve been confused about this before so the clarification seemed like it might be useful for somebody!
I’m only twenty-two years old, and I haven’t completed a university or college certification yet. When I first encountered 80,000 Hours and effective altruism, I opted for earning to give because I didn’t think of myself as having many skills. I don’t know what soft or general skills I’ll learn in various careers by the time I’m thirty or forty, but I know the names of jobs which earn lots of money. Earning to give is what seems available to me. I’m aiming for it because it’s the only thing I can concretely imagine myself doing right now. I think this might be the case for lots of young(er) effective altruists, so I think Owen’s correct.
Didn’t we all already mean that when we said ‘marginal EtGers’? Like—the people whose decision to be in an earning-to-give career rather than charity is marginal? And I agree that it applies more frequently to early-career-stage. But yes, I agree that anyone could theoretically do a little less earning and a little more volunteering for example.
We probably did, but the meaning of “marginal EtGers” should be context-dependent[1], so it seemed worth clarification.
[1] For example if we’re talking about the value of persuading people to re-cast their professional career as earning-to-give, we could want to refer to people who were only just persuaded, or people in different areas who might be reached by expanding the efforts—either of which is a different margin.
Thanks for clarifying! Yup, the coordination problem is pretty hard. (Personally, I actually basically have no idea whether I should actually consider myself a marginal EtGer, and don’t really know how to answer the parts of this question that require information about the rest of the EA community.)
Would you be the first to jump ship, around the middle or towards the end?
Has there been much thought or discussion put into the idea of making existing charities more effective? Sure, there are lots of organizations out there that focus on making marketing more effective or getting more donors, but there seems to be a big hole in the market for people or organizations that work to turn current charities into ones we would consider effective. I’ve thought about this myself quite frequently and would be stoked to see something like this. Has this already been discussed elsewhere?
I haven’t seen this discussed online. When I met Holden Karnofsky, co-executive director of GiveWell, I asked him if making existing charities more effective is work GiveWell would consider getting into. He told me GiveWell is not considering that, and they intend to stick to their work of charity evaluation. He believes making existing charities more effective would be a more difficult job than evaluating them.
I agree that making charities more effective is a difficult uphill battle. It is definitely easier and more beneficial in the short term to evaluate existing charities. It would be really great to see some sort of cost/benefit breakdown comparing the time and energy it would take to create a new charity vs retrofitting an existing one.
This seems like a very complex question, but I think it may be valuable as a long-term strategy for EAs to look at. The goal is to divert enough energy and attention to highly effective charities that other charities ultimately notice what donors want. To my mind it would be really fantastic if at that moment there was an EA organization or group that could step in and help organizations look at how to become more effective. Again, this seems to be a long way off, but something worth doing an analysis of.
On the face of it, the Carter Center, Fred Hollows Foundation, and several other charities look like they are already doing fantastically cost-effective projects, but on the whole they don’t fare as well. It also appears that large donors can request that certain things be done with their money (from having worked at WaterAid), so concerns about intra-organisation arbitrage might be overstated. I think you’re on the right lines there, Syd! Perhaps this could be an article on its own?
Thanks Tom. I think I will pose this question in its own article, but wanted to get some initial feedback from people beforehand. So thanks :)
Thanks for posting this Ben, I’ve noticed some of the same things you mention.
I personally would be more motivated against earning to give if I heard an argument about why the “more money equals more stuff done” equation, which seems to hold in the rest of our economy, fails for charities.
Why isn’t there an equal presumption that “more people willing to try things equals more stuff done”? EtGers and org-starters are complementary goods. The point isn’t that EtG is not going to do anything, just that there might be other things that did even more.
Thanks Ben, maybe I should be more clear. The naïve argument for earning to give is something like: “I donated $100,000 last year; based on my skill set I would probably earn $50,000 working at a charity; therefore earning to give is twice as good.”
My presumption is that you think this argument gets something wrong, and I’m trying to figure out what it is.
(As Owen mentioned elsewhere: if you are talking about truly marginal EtGers then the argument doesn’t have much force, so I am perhaps unfairly putting words into your mouth by assuming you mean that people like me are incorrectly following arguments like the above.)
“I donated $100,000 last year; based on my skill set I would probably earn $50,000 working at a charity; therefore earning to give is twice as good.”
The naive argument definitely works if $50,000 will reliably hire someone as good as you would be at the job, where that someone would otherwise have been doing activities of little altruistic value.
That’s a bit of an odd situation. Say your programming skills are worth $200k on the open market, and a charity needs skills like those. Why can they successfully hire a programmer for $50k? They need some combination of a supply of candidates who are eager to work for them for non-monetary reasons and lowered standards. Sort of like how lots of people want to be musicians, so average musician wages are very low.
Basically the naive argument assumes that there is a large population ready to donate their time at discount rates and high productivity but not to earn and donate with comparable willingness/productivity. Earning to give then exploits the opportunity for cut-rate hiring created by others’ unreasonable enthusiasm for direct work.
And that’s true for many positions, especially for people who have particular advantages in earning income. But it will break down for less popular positions, which offer less non-monetary compensation: they may be higher risk, weirder, have poorer exit options, involve fewer warm fuzzies, require years of onerous preparation getting a PhD or proceeding on the tenure track, etc.
In some cases there are also other pressures to keep wages low and rely on employees voluntarily taking low wages, e.g. if you have several employees already doing that then hiring new ones at market wages can create workplace jealousy (which is a big deal, believed to be a major contributor to wage stickiness and unemployment).
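To make the argument above concrete, here’s a toy numerical version of the naive comparison with a counterfactual adjustment for the replacement hire; every figure is an illustrative assumption, not a number from this thread:

```python
# Toy model of the naive EtG argument, with a counterfactual
# adjustment for the replacement hire. All figures are illustrative.
donation = 100_000            # donated per year while earning to give
your_direct_value = 150_000   # impact you'd create per year in direct work
replacement_salary = 50_000   # what the charity pays your replacement

# Naive view: the $50K hire is just as good as you and had no
# altruistic alternative, so EtG wins whenever donation > salary.
# Adjusted view: the hire is only partly as productive, and would
# have done something of value anyway.
replacement_quality = 0.6     # fraction of your output they produce
their_counterfactual = 10_000 # impact they'd have had elsewhere

etg_value = (donation - replacement_salary            # leftover funding
             + replacement_quality * your_direct_value
             - their_counterfactual)
direct_value = your_direct_value

print(f"EtG path: {etg_value:,.0f}  vs  direct work: {direct_value:,.0f}")
# With these assumptions: 130,000 vs 150,000, so the naive 2x
# advantage disappears once the replacement isn't a free lunch.
```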
Thanks Carl. Here’s a specific example: ACE spends about $100,000 per year on its five staff members. I don’t think all five are full-time paid, but still this implies a pretty low average salary.
While ACE probably has some need for software skills, my guess is that my labor would be significantly less valuable than the staff they currently have on board with backgrounds in statistics etc.
Does this imply that earning to give is a good career route for me? Or is there something I’m not thinking of?
I think the counterfactual shouldn’t be seen as being an employee; it should be seen as being a leader—that’s what’s wrong: leaders aren’t really easily replaceable. Starting a new charity or project that wouldn’t have happened otherwise should perhaps be the counterfactual. Now we can start talking about the relative scarcity of funds against ideas against quality execution, see where the gap is, and send marginal EA talent in that direction. It doesn’t make much sense to put your career into executing someone else’s plan that would have been done anyway, so it’s a bit of a straw man against EtG?
Oh, yeah, that assumes that the marginal person who was hired by the charity (a) existed and (b) their salary was a good proxy for their value created. Those seem pretty shaky, since more people means more ability to raise funds from outside EA and my impression is that people working for nonprofits don’t pay super much attention to salary as a signal.
The argument you give proves far too much; for instance, it suggests that Holden and Elie should go back to finance.
I think EAs irrationally avoid giving to “second-best” charities (like GiveWell’s standouts), but that’s a relatively weak impression. It might be helpful to talk more about top giving opportunities in a given moment/year, rather than talking about top charities, which can become less top as donations are made, until donating doesn’t feel so shiny anymore (also saying this as a random EA, not a soon-to-be GW staffer).
Of course, it might be better to ask people to give later in general, but there’s no reason as far as I know to believe the best order would be ‘donate to room-for-funding-remaining top charity’ > ‘donate later’ > ‘donate to second-best charity.’
Also, as Eliezer and Jacy pointed out on Facebook, this sufficient funding argument is far less true of existential risk and animal-focused charities than global poverty ones (in fact, many of those are somewhat strapped for cash).
I thought the GW belief was that their top charities are still more effective than standouts. What are the arguments against this reluctance?
They do. My comment was in reference to the fact that the top charities may run out of room for funding. When that occurs, in my experience (some) EAs tend to forget about or avoid opportunities to help fund standout charities, which are still very effective although slightly less so, out of a bias against “second-best” opportunities.
I’ve heard about 7 EAs discuss hesitating to donate to GW’s top charities because they thought the top charities would run out of room for funding without their contributions. They all decided to either give later, or give to a non-global poverty charity. None mentioned the possibility of donating to a standout charity.
This seems like an important overlooked point. I’ll try bringing this up if I’m ever present at such a conversation.
I don’t have any new considerations to add but I agree with a) there’s probably a relative oversupply of ‘EA money’ on the margin and b) there are various psychological reasons that would pull towards E2G over direct work.
Psychological reasons? Social status of high earning jobs? The dopaminergic reward of numbers on a page and the oxytocic reward of the expression of gratitude from those you fund? Certainty against uncertainty? Any others?
I think there are plausibly contrary explanations for some of these observations. For senior staff moving to Open Phil, it could be because Open Phil is younger, and its tasks are less structured. For top charities running out of room for more funding, this is only the top couple of GiveWell charities, and this needn’t apply to intergenerationally-altruistic charities. GiveWell has mentioned a couple of organisations that they would like to see, but it’s not as though finding such opportunities has yet become their main activity.
I think the general point is right though: Good Ventures has most of the cash that we need, and EA Ventures has some also, as do Jaan Tallinn, Sam Harris, edit: Matt Wage and others. Most of the people who are clever enough to want to make epic charities are also clever enough to know that they can have a more secure and conventional life elsewhere. This can be solved by just starting epic charities anyway, and by accumulating more funding to push more marginal individuals to do the same.
What charity would you start?
Yeah, I agree; I think they’re suggestive rather than definitive.
Sorry—didn’t you previously say that you agreed marginal people should be focusing less on accumulating more funding? I think I’m missing a link somewhere here.
Good question! I suspect the fact that this is much less well-defined than “which org would you donate to” is one of the psychological factors in favor of EtG :P
EA charities seem sufficiently talent constrained at the moment that I think some organisations will want to take a combination of two different measures: increasing salaries and encouraging people to move across from EtG (or not enter EtG in the first place).
To avoid confusing people: my own annual contributions to charity are modest.
Wait, I meant Matt Wage. Why did I write Nick Beckstead???
Sounds reasonable—and if successful will come back to a funding constraint.
You want to put your most expensive resource at the bottleneck as an efficiency heuristic.
Do people think the bottleneck is ideas, execution, or funding—or the infrastructure needed to facilitate one or more of those?
It seems a shame that such intelligent and amazing people in the EA movement are, on the whole, putting their most productive and creative years (25-35?) into EtG rather than building and delivering practical solutions to improve the world in the best way possible outside of AI/Xrisk and movement building—as I think there is a lot of learning value from this kind of thing!
EA groups specialised around key promising interests such as healthcare, AI, governance, animal welfare, poverty and development etc. that can learn together, network efficiently, keep track of projects, prioritise collective resources etc. might be a way forward. AI/X-risk and perhaps animal welfare appears from the outside to be more developed along these lines than the others? What are people’s thoughts?
I think animal welfare is more developed along these lines as well. Animal Ethics, the Foundational Research Institute, Animal Charity Evaluators, and Raising for Effective Giving are all organizations started by effective altruists with a total or partial focus on animal welfare. The coordinated efforts for animal welfare within effective altruism are newer, so you may not have heard of these organizations as much as MIRI or FLI. None of those organizations was founded earlier than late 2013. Hopefully more success is to come to and from them.
Thanks Evan—is it your impression that EAs across these activities are in touch and taking each other’s projects, strategies and learning into account when they plan their work?
Animal Charity Evaluators acts like the nexus between all these organizations. The newer ones founded by effective altruists don’t have enough of a track record to merit a recommendation from Animal Charity Evaluators yet. However, all of these, and ACE’s top recommended charities, like Mercy For Animals and the Humane League, are in touch with each other. Nick Cooney, who works for Animal Charity Evaluators, has written a book about effective altruism this year, in addition to the one Peter Singer has written, and they recommend each other’s books to the public.
So, yes, they’re definitely in touch with each other. One difference between existential risk reduction and animal activism is x-risk is smaller. X-risk reduction is so integrated with effective altruism that I think EA knows about everyone in x-risk reduction, and everyone in x-risk reduction knows about EA. However, animal activism is a much bigger movement than just what’s touched by effective altruism. Animal welfare/rights has much overlap with environmentalism, which might be the biggest social movement in the world, stretching even the definition of “movement”, really. The rest of animal activism seems so big it’s difficult for effective altruism to take into account the actions of the rest of that community, and EA itself is probably only a small part of animal activism.
For more information, I recommend contacting a director from the Board of ACE, such as Rob Wiblin, Brian Tomasik, or Jacy Anthis.
Nick Cooney works for MFA. Although MFA is one of ACE’s top ranked charities.
The answer to this question depends somewhat on your focus area, but my experience so far has been that almost all the organizations I work with could use more money (including ACE, Animal Ethics, most Swiss EA projects, MIRI, FHI, and most object-level charities except those that get saturated by Good Ventures). Many of these groups need money more than talent right now.
I also think the people who can be hired for an opportunity cost of ~$50K aren’t 4 times less talented than those who can be hired for an opportunity cost of $200K. This belief is partly based on what I’ve seen at current charities and partly based on the premise that EAs aren’t as special as they can sometimes seem. When you do nonprofit startups, most of the time goes toward ordinary tasks that lots of people could do.
I enjoy working on nonprofit stuff, so I don’t find EtG psychologically more attractive, but after a lot of thinking about this question, I’ve tentatively concluded that I can make a bigger impact EtGing. I think this would be true even if I could only make ~$150K/year, which is too low as an estimate of long-term future earnings.
I broadly agree with this (and have some relevant posts in draft).
Thanks! Do you have a perspective on specific things you’d like to see more of instead?
At a broad level, I expect the ratio of (value of marginal earning-to-give):(value of marginal direct work) can’t get too distorted, because of mechanisms like:
Donors discuss publicly whether they feel the pool of giving opportunities is deep;
Charities talk publicly about whether they are more funding- or talent-constrained;
Charities raise or lower salaries, making direct work more or less appealing to people using that as a heuristic in choosing a career.
However this isn’t in conflict with your suggestion that at the margin now there may be slightly too much EtG. I’m not sure about that.
You mean like they are in the comments of this post? ;-)
(This reminds me of a similar conversation we had on this post...)
Do we know how sensitive recruiting is to salaries? I would have thought not very for direct work, because many people weren’t using salary as a heuristic.
I get the (purely anecdotal) impression that recruiting is sensitive to salaries in the sense that some people who would be good fits for EA charities automatically rule them out because the salaries are low enough that they would have to make undesirable time/money tradeoffs. However, it’s a bit of a tricky problem, because most nonprofits want to pay everyone roughly the same amount, so hiring one marginal person at, say, 20% more really means increasing all salaries by that much.
Another relevant factor is how much of a salary cut you’re looking at when moving from EtG to direct work. In for-profit organizations, the most competent people frequently get paid 3-10x as much as average. I don’t think a 3-10x disparity would be culturally acceptable in EA charities, which means that someone at the top essentially has to forgo a much higher percentage of their salary to do direct work.
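As a quick sketch of the pay-parity arithmetic in the comment above (the 20% figure comes from that comment; the head count and salary are illustrative assumptions):

```python
# Marginal cost of one above-band hire when parity norms force
# raising everyone's pay. Head count and salary are illustrative.
current_staff = 10
average_salary = 50_000
raise_fraction = 0.20        # the 20% premium the marginal hire needs

sticker_price = average_salary * (1 + raise_fraction)
parity_raises = current_staff * average_salary * raise_fraction
true_marginal_cost = sticker_price + parity_raises

print(f"Sticker price: ${sticker_price:,.0f}")             # $60,000
print(f"True marginal cost: ${true_marginal_cost:,.0f}")   # $160,000
# The parity norm more than doubles the real cost of the hire.
```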
Yes, exactly. :)
I can’t quantify this, but I can give anecdata which suggest “a bit”:
CEA has previously been interested in people whom we couldn’t attract for salary reasons;
I personally had reservations about how useful it would be for me to work at CEA. When I unpacked this I realised that I had been using salary as a partial heuristic for how much value I was adding, and it looked weak compared to postdocs (also not in an “optimised for making money” category). This was easier to lay aside after spotting it, and I suspect that relatively few people use salary as an explicit heuristic, but it is a fairly normal thing in society generally, so I wouldn’t be surprised if some other people were letting it enter into their decision making.
Even if there is an effect here, it could be that organisations can end up talent-constrained, so that it is hard/expensive to pay money for better staff. VipulNaik posted some analysis of this on LW; I also did some thinking about it before joining CEA.
Like Owen all I can offer is anecdata. I’ve worked in nonprofits or public sector jobs during my career and there is a serious brain drain problem. Again I don’t have specific numbers but it is talked about frequently, and definitely felt. I know several talented, thoughtful, hard working people who left nonprofit work because there was no money, and no expectation of this changing through their career.
In my experience there is actually an established norm that if you are asking for money on par with what your position would earn in the for-profit sector, you are vilified. This is one reason I mentioned improving nonprofits in my previous comment. I have the impression that changing some of the cultural norms around nonprofit work would create the turnover necessary to lift an under-performing organization into EA efficacy.
Why couldn’t CEA fundraise more to pay for better salaries? This sort of thing seems like a failure of too little EtG.
I’m not sure whether it should have happened in these cases (particularly if the total costs of offering high salaries spills over into higher salaries across the board), but yes, it was meant to be evidence that higher salaries could achieve at least somewhat better outcomes, a corollary of which is that the marginal value of EtG can’t diminish too severely.