Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.
One gripe I have with this debate is the focus on EA orgs. Effective Altruism is or should be about doing the most good. Organisations which are explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.
Whether ‘doing the most good’ in the world is more talent than funding constrained is much harder to establish, but it is the question that actually matters.
If we focus the debate on EA orgs and our general vision as a movement on orgs that are labelled EA, the EA Community runs the risk of overlooking efforts and opportunities which aren’t branded EA.
Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell-recommended charities won’t be enough to fix it either. Using EA-branded framing isn’t unique to you—but it can make us lose track of the bigger picture: all the problems that still need to be solved, and all the funding that is still needed to solve them.
If you want to focus on fixing global poverty, just because EA focuses on GW-recommended charities doesn’t mean EtG is the best approach—how about training to be a development economist instead? The world could certainly use more than ten additional ones. (Edit: But it is not obvious to me whether global poverty as a whole is more talent or funding constrained—you’d need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)
“Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.”
It would be true if that were what was meant, but the speaker might also mean that ‘anything which existing EA donors like Open Phil can be convinced to fund’ will also be(come) talent constrained.
Inasmuch as there are lots of big EA donors willing to change where they give, activities that aren’t branded as EA may still be latently talent constrained, if they can be identified.
The speaker might also think activities branded as EA are more effective than the alternatives, in which case the money/talent balance within those activities will be particularly important.
I think this is a bit unfair. I took the OP to be referring to the previous discussion of this by 80k, which was specifically about EA orgs.
I had a similar reaction.
It was the phrase “Money gap—Large (~$86 million)” in the summary that got me. It just seems immediately odd: if you think that earning to give to global poverty charities is on a par with other common EA career choices in terms of marginal impact (i.e. assuming you think “poverty” should be on the table at all for us), then this funding gap is equivalent to only ~$0.086 per person for the bottom billion. And in fact the linked post gives a funding gap of something more like $400 million for GiveWell’s top charities alone (on top of expected funding from Good Ventures and donors who aren’t influenced by GiveWell), with GiveDirectly able to absorb “over 100 million dollars”. But it’s not so odd if you think that the expected value of donating to GiveWell-recommended charities is several orders of magnitude greater than for the average global poverty charity. I’m aware that heavy-tailed distributions are probably at play here, but I’m very skeptical that GiveWell has found anywhere near the end of that tail (although I think they’re the best we have).
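For scale, the per-person arithmetic behind those figures can be sketched as follows (a rough illustration only; the ~$86M and ~$400M numbers come from the post and the linked GiveWell analysis, and “bottom billion” is taken as a round one billion people):

```python
# Rough per-capita arithmetic for the funding-gap figures discussed above.
bottom_billion = 1_000_000_000   # people, a round approximation

money_gap = 86_000_000           # ~$86M "money gap" from the post's summary
givewell_gap = 400_000_000       # ~$400M gap cited for GiveWell's top charities

per_person_small = money_gap / bottom_billion
per_person_large = givewell_gap / bottom_billion

print(f"${per_person_small:.3f} per person")  # ~$0.086
print(f"${per_person_large:.2f} per person")  # ~$0.40
```

Either way, the per-person sums involved are tiny next to the scale of the problem, which is the point being made above.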
Regardless of what the author meant, I think I see this kind of thinking in EA fairly regularly, and it’s encouraged by giving the “neglectedness” criterion such prominence, perhaps unduly.
And yes, I also want to thank the author for encouraging people to think and talk about this in a more nuanced way.
Here’s what Lewis Bollard had to say about the talent vs. funding issue when asked about it on the 80,000 Hours podcast (in September 2017):
Robert Wiblin: My impression is that farm animal welfare organisations, at least the ones that I’m aware of that are associated with Effective Altruism, are often among the most funding constrained. That they often feel like they’re most limited by access to money. Does this suggest that people who are concerned with animal welfare should be more inclined to do earning to give and, perhaps, rather than work in the area, instead make money and give it away?
Lewis Bollard: I don’t think so. I think that was true until two years ago, or until eighteen months ago when we started grantmaking in this field. I think the situation has dramatically improved in terms of funding, largely because of Open Phil entering this field, but also because there are a number of other very generous donors who’ve either entered the field or significantly increased their giving in the last two years.
Right now I think there is a bigger talent gap than financial gap for farm animal welfare groups. That’s not to say it will always be that way, and I certainly do think that for someone whose aptitude or inclination is heavily toward earning to give, it could still well make sense. If someone has great quantitative skills and enjoys working at a hedge fund, then I would say earn to give. That could still be a really powerful path, and we will need more and more funders over time to continue scaling up the movement, but all things equal, I would encourage someone to focus more on the talent piece now, because I do think that things have really flipped in the last few years, and I’m pretty optimistic that the funding will continue to grow in this space for animal welfare.
Robert Wiblin: What makes you confident about that? You don’t expect to be fired in the next few years?
Lewis Bollard: First, I hope I won’t be fired, but I think there’s a deep commitment from the Open Philanthropy Project to continue strong funding in this space, to continue funding on at least the level we’re funding currently and hopefully more.
I’ve also just seen a number of new large-ish funders coming online. Just in the last two years I would say the number of funders giving more than two hundred thousand dollars a year has doubled, and I’ve started to see real interest from some other major potential funders.
I think it’s natural that, as this issue has gained public prominence, a lot of potential donors, people who have great wealth, have realised that this is something important and an area where they can make a great difference.
For the animal advocacy space, my anecdata suggest that the talent gap is in large part a product of funding constraints. Most animal charities pay rather poorly, even compared to other nonprofits.
Yes, each cause has different relative needs.
It’s also more precise and often clearer to talk about particular types of talent, rather than “talent” as a whole: e.g. the AI safety space is highly constrained by a shortage of people with deep expertise in machine learning, and global poverty isn’t.
However, when we say “the landscape seems more talent constrained than funding constrained”, what we typically mean is that given our view of cause priorities, EA-aligned people can generally have a greater impact through direct work than earning to give, and I still think that’s the case.
In 2015 you (Benjamin) wrote a post which, if I’m reading it right, aspires to answer the same question, but is in very direct contradiction with the conclusions of your (Katherine’s) post regarding which causes are relatively talent constrained. I would be interested in hearing about the sources of this disagreement from both of you (assuming it is a disagreement, and not just that time has passed and things have changed, or an issue of metrics or semantics).
Here is the relevant excerpt:
https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/
It sounds like both of you (Katherine and Benjamin) agree that AI is “talent constrained”. Pretty straightforward, it’s hard to find sufficiently talented people with the specialized skills necessary.
It sounds like the two of you diverge on global poverty, for reasons that make sense to me.
Katherine’s analysis, as I understand it, straightforwardly looks at what GiveWell says the current global poverty funding gap is, which means that impact via talent basically relies on doing more good with the existing money, performing better than what is currently out there. (And how was your talent gap estimated? Is it just a count of the currently hiring open positions on the EA job board?)
Benjamin’s analysis, as I understand it, is that EA’s growing financial influence means that more money is going to come in pretty soon, and also that effective altruists are pretty good at redirecting outside funds to their causes (so, if you build good talent infrastructure and rigorously demonstrate impact and a funding gap, funding will come)
Is this a correct summary of your respective arguments? I understand how two people might come to different conclusions here, given the differing methods of estimating and depending on what they thought about EA’s ability to increase funding over time and close well demonstrated funding gaps.
(As an aside, Benjamin’s post and accompanying documents made some predictions about the next few years—can anyone link me to a retrospective regarding how those predictions have borne out?)
It sounds like you diverge on animal rights, for reasons I would like to understand.
Benjamin, it sounds like you / Jon Bockman are saying that ending factory farming is exceptional among popular EA causes in having more talent than it can hire and being in sore need of funding.
Whereas Katherine, it sounds like you’re saying that animal rights is particularly in need of talent relative to all the other cause areas you’ve mentioned here.
These seem like pretty diametrically opposed claims. Is this a real disagreement or have I misread? I’m not actually sure what the source of this disagreement is, other than Katherine and Jon having different intuitions, or bird’s-eye views of different parts of the landscape? Has Jon written more on this topic? If it’s just a matter of two people’s intuitions, it doesn’t leave much room for evaluating either claim. (I get the sense that Katherine’s claim isn’t based on intuition, but on the fact that EA animal organizations are currently expanding, which increases the estimated number of open job postings available in the near future. Is that correct?)
(Motivation: I’m reading this post now as part of the CE incubation program’s reading list, and was surprised because the conclusions conflicted with my intuitions, some of which I think were originally formed by reading Benjamin’s posts a few years ago. Since the program aims to set me on a path that could help redirect funding, redirect talent, create room for more talent, and/or create room for more funding within global poverty or animal issues, the answers to these questions may be of practical value to me.)
I’d be happy if either of you could weigh in on this / explain the nature and sources of disagreement (if there is in fact a disagreement) a bit more!
(PS—can I tag two people to be notified by a comment? Or are people notified about everything that occurs within their threads?)
What are your thoughts on this? https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/
In particular
> a cause supported by the community that seems more funding constrained than talent constrained – is ending factory farming. Jon Bockman of Animal Charity Evaluators, told me that vegan advocacy charities have lots of enthusiastic volunteers but not enough funds to hire them, meaning that funding is the greater bottleneck (unless you have the potential to be a leader and innovator in the movement).
Please see also my reply to Benjamin Todd’s comment for a longer version of this question, which I wanted to address to both of you, but I don’t think this forum has user tagging functionality.
Like you, at 80,000 Hours we view the relative impact of money vs talent to be specific to particular problems and potentially particular approaches too.
First you need to look for what activities you think are most impactful, and then see what your money can generate vs your time.
This statement could be interpreted as suggesting that people should use a two-step process: first, choose a problem based on how pressing it is and then second, decide how to contribute to solving that problem.* That two-step approach would be a bad idea because some people may be able to make a greater impact working on a less pressing problem if they are especially effective at addressing that problem. Because of this, information about how pressing different problems are relative to each other should not be used to choose a single problem; instead, it should be used as background information when comparing careers across problems.
*I doubt that’s what you actually meant since you wrote the linked article that discusses personal fit. But I figured some people might be unfamiliar with that article, so I thought it’d be worthwhile to note the issue.
Yes—the reason you need to look at a bunch of activities rather than just one is that your personal fit, both in general and between earning vs direct work, could materially reorder them.
If the AI safety/alignment community is altogether around 50 people, that’s a large relative gap. Depending on how you count, it might be bigger than 50 people, but the talent gap seems large in relative terms either way. :)
Thanks for this post, it was very insightful. Do you have any ideas on the talent/funding gap scenario for other EA cause areas like global priorities research (I believe this doesn’t come under meta EA), biosecurity, nuclear security, improving institutional decision making, etc?
If this is true, I just want to take a moment to celebrate that the EA movement has more or less doubled animal rights funding globally. That’s awesome!
This is very helpful. I would note that the Global Catastrophic Risk Institute works on AI and is funding constrained. Of course it also does other x-risk work, but I think it would be good to broaden your category to include this, or to have a separate category.