Specialization and Giving to Charity

Here's an argument against giving to charity that I don't really like, am not sure why it's wrong, and haven't thought that much about.

First, it's not a fun fact, but basically every human enterprise looks like a Pareto distribution if you plot percentile-of-success-in-the-domain on the x-axis and metric-of-success on the y-axis. In other words, for almost everything humans compete in, the difference in payoff between being at the 98th percentile and the 99th percentile is much larger than the difference between the 56th and 57th percentile.
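To make the percentile claim concrete, here's a quick sketch using the quantile function of a Pareto distribution. The shape parameter α ≈ 1.16 (the value that roughly produces the classic 80/20 rule) and the scale are illustrative assumptions, not fits to any real domain:

```python
def pareto_quantile(p, alpha=1.16, x_min=1.0):
    """Payoff at percentile p (0 < p < 1) under a Pareto distribution.

    alpha=1.16 roughly corresponds to the 80/20 rule; both parameters
    are illustrative assumptions, not estimates of any real domain.
    """
    return x_min / (1.0 - p) ** (1.0 / alpha)

# Payoff gained by moving up one percentile, near the top vs. the middle
gap_top = pareto_quantile(0.99) - pareto_quantile(0.98)
gap_mid = pareto_quantile(0.57) - pareto_quantile(0.56)

print(f"98th -> 99th percentile gain: {gap_top:.2f}")
print(f"56th -> 57th percentile gain: {gap_mid:.3f}")
print(f"ratio: {gap_top / gap_mid:.0f}x")
```

With these (assumed) parameters, the one-percentile step near the top is worth several hundred times the same step near the median, which is the asymmetry the argument leans on.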

Given this reality, how can we most effectively coordinate to improve the world?

Here's what it seems we shouldn't do: everyone tries to have the highest-impact career possible while making the most money possible, contributing to several different cause areas, and being an influential person in EA/rationalist discourse. Optimizing to succeed in two or more meaningfully different hierarchies probably means that you'll do worse in each domain than if you had optimized exclusively for one thing. I suspect that many people wind up at the 80th percentile in a handful of impact-relevant domains when they could've been at the 99th percentile in one domain if they had let themselves be at the 30th percentile in all the others. Naively, it may appear better to be 80th percentile at a bunch of stuff, but it's in fact far more impactful to have a community full of really specialized people, since, again, the difference between the 56th and 57th percentile is much smaller than the difference between the 98th and 99th percentile for basically everything.

So the issue of EA-not-needing-more-generalists isn't just a supply-and-demand thing going on at this particular moment; I think it's likely to always be true because of this icky Pareto stuff.

Perhaps we should instead optimize for having as many one-percenters in things as we can, even if the opportunity cost of becoming a one-percenter looks fairly high in another domain. I think we can pretty safely assume that if everyone follows this strategy, we'll essentially offset each other's opportunity costs and net much more goodness done. So, tentatively, I think we should cultivate a norm of each-person-do-good-via-one-strategy.

(There are certainly cases where different domains are close enough together that excelling in one sets you up to grab a lot of ground in another with relatively low entry cost, and in these cases we should make an exception. But I think the above principle is usually vastly underappreciated when communities try to optimize for a common goal.)

So, should I stop donating to charity?

The difference that an early-career researcher can make by donating to charity looks pretty marginal relative to the difference an earn-to-give person can make. If, as a general principle, researchers in EA stopped donating to charity, I suspect they'd be able to advance their relative standing in their own careers by at least an additional percentile, by doing things like funding their own research projects, taking breaks from work to explore promising ideas, spending less time thinking about finances, etc. This is true for most careers where the path to impact is the work itself, or the reputation or influence of the person doing the job.

The argument is: let the earn-to-give people do earn-to-give, and if that's not you, instead be super "selfish" in pursuing whatever career goals will most improve your own path to impact. Similarly, earn-to-give people: stop bothering with any non-money-related ways of doing good, just stfu and get that bag or something, since you earning ever-so-slightly-more relative to other magnates is probably as good as hundreds of median EAs taking the GWWC pledge.

There might be other reasons to donate to charity, too, like costly signaling, donating as a commitment device, etc. But this feels a bit suspicious-convergence-y to me: I suspect that the 99th-percentile strategies for these goals may be other things entirely, and we should have a low prior that something we used to do for a since-debunked reason is something we should keep doing now for a different reason, especially given the influence of motivated reasoning.