The Bittersweetness of Replaceability
Everybody’s replaceable
When I first became interested in effective altruist ideas, I was inspired by the power of one person to make a difference: “the life you can save”, as Peter Singer puts it. I planned to save lives by becoming an infectious disease researcher. So the first time I read about replaceability, it was a gut punch: I realized that it would be futile for me to pursue a highly competitive biomedical research position, especially given that I was mediocre at wet-lab research. In the best case, I would obtain a research position, but I would merely be displacing other applicants who were roughly as good as me. I became deeply depressed for a time as I finished a degree that was no longer useful to me. After I graduated, I embarked on the frustrating, counterintuitive challenge of making a difference in a world in which everybody’s replaceable.
Replaceability evokes ambivalence: on the one hand, it makes one’s efforts to improve the world feel Sisyphean. On the other hand, replaceability is based on a fairly positive perception of the world, in which lots of people are pursuing worthwhile goals fairly well. Lately I’ve become more optimistic about the world, which makes me more inclined to believe in replaceability. This makes certain EA options appear less promising, but an understanding of incentives can lead to the identification of areas in which one can make an irreplaceable difference.
Donation saturation
Earning to give avoids some of the problems of replaceability, but a major challenge is finding an effective charity that actually needs donations. As GiveWell discusses, few promising charitable causes are underfunded; GiveWell often cites a lack of room for more funding when it declines to recommend a charity. It’s somewhat easier to find funding gaps when considering more esoteric causes, such as existential risks, but as the GiveWell post mentions, even these areas receive considerable funding, and a number of organizations already work on global catastrophic risks. I’ve sometimes donated to EA organizations such as the Centre for Effective Altruism, only to feel a twinge of regret when their funding goals are quickly met or even exceeded.
“Do your job”
Because of the limitations of earning to give, I’ve considered pursuing a career that would give me control over large amounts of money, particularly scientific funding. But lately I’ve been having a mental conversation like this:
me: If I take a job funding scientific research, I can do a lot of good. I could potentially move tens of millions of dollars to effective research that helps people.
me: But isn’t “funding effective research that helps people” already the goal of NSF and other major funders?
me: Well I would do it better, because I’m awesome.
me: That’s awfully cocky of you. What’s something specific you’d change?
me: I would fund more pandemic prevention, for example.
me: The U.S. government already spends $1 billion a year on biosecurity, as the GiveWell blog post mentioned. For comparison, the entire budget of NSF is less than $8 billion. The government is also stockpiling vaccines in preparation for pandemics. There might be more work to be done, but you’re running out of low-hanging fruit.
Many EA career ideas reduce to “Do your job. Do the shit out of it.” This is sound advice, but it’s not as radical or inspiring as one might hope.
Is everyone else an idiot?
If everyone else were completely incompetent, being good at one’s job could allow one to make a large difference in the world. The narrative that “everyone but me is an idiot” is popular among open mic comedians, and there are hints of this feeling in much of the rationalist literature. Proponents of this viewpoint often point to problems in scientific research, particularly the lack of reproducibility. While I agree that there are issues with the research process, people are already beginning to address them, as the GiveWell blog post discussed. Additionally, a single study doesn’t make or break a discipline. My experience in academia (I’m a bioinformatics PhD student) has been quite positive: most published research in my field seems fairly good, and I think several factors contribute:
Saturation: There are 1.47 million researchers in the U.S., and they occupy almost every niche imaginable. I recently began a project in which I was trying to develop an expectation-maximization algorithm to filter binding events from background noise in Hi-C data (a toy sketch of this kind of approach appears after this list). It was as arcane as it sounds, but while I was in the middle of it, another group published a paper doing exactly what I was trying to do, with an approach far better than mine. Even if you believe that most people aren’t very clever, the sheer numbers mean that nearly every good idea is already taken.
Feedback: While an individual researcher has incentive to make her own research look good, there’s a pleasure to be had in destroying someone else’s research. This can be witnessed at the weekly lab meetings I attend, in which adults are nearly brought to tears. Though the system is brutal, it results in thoughtful criticism that greatly improves the quality of research.
System 2 thinking: Daniel Kahneman describes system 2 thinking as slow, analytical thinking. Many cognitive errors and biases result from the snap judgments of system 1 thinking. I’d expect scientists to use system 2 thinking, because they think about problems over long periods of time and have a lot of domain-specific knowledge.
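To make the Hi-C anecdote under “Saturation” concrete, here is a minimal sketch of the kind of approach I was attempting: a two-component Gaussian mixture fit by expectation-maximization, with one component modeling background noise and the other modeling binding signal. The one-dimensional setup and every name and number below are purely illustrative; the real problem is considerably messier.

```python
# A toy sketch (illustrative only): separating "signal" from "background"
# with a two-component 1-D Gaussian mixture fit by expectation-maximization.
# The real Hi-C problem is far messier; all numbers here are made up.
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_two_component(x, n_iter=200):
    """Fit mixing weight, means, and SDs of a background + signal mixture."""
    pi = 0.5                                              # mixing weight of signal
    mu = np.array([np.percentile(x, 25), np.percentile(x, 90)])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior probability that each observation is signal.
        p_bg = (1 - pi) * normal_pdf(x, mu[0], sigma[0])
        p_sig = pi * normal_pdf(x, mu[1], sigma[1])
        resp = p_sig / (p_bg + p_sig)
        # M-step: re-estimate parameters from the soft assignments.
        pi = resp.mean()
        mu = np.array([np.average(x, weights=1 - resp),
                       np.average(x, weights=resp)])
        sigma = np.sqrt([np.average((x - mu[0]) ** 2, weights=1 - resp),
                         np.average((x - mu[1]) ** 2, weights=resp)])
    return pi, mu, sigma, resp

# Toy data: mostly background noise plus a minority of true "binding events".
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 9000),    # background
                    rng.normal(3.0, 1.0, 1000)])   # signal
pi, mu, sigma, resp = em_two_component(x)
print(f"estimated signal fraction: {pi:.2f}, component means: {mu.round(2)}")
binding_calls = resp > 0.9  # keep only high-confidence "binding events"
```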
How to be irreplaceable
Finding irreplaceable EA work requires identifying areas of society that have systematic implementation problems. Focusing on cognitive biases may not be the best strategy, given that certain areas of society, such as scientific research and the stock market, seem to have avoided the worst of these biases, even without explicit training. Instead, the Austrian school of economics can offer insight here. Austrian economists define rationality in terms of the actions people take to pursue their goals; goals are not explicit but are revealed through behavior. For example, a commonly cited instance of irrationality is the default effect in organ donation: when people are organ donors by default, most remain donors, but when donation is not the default, most do not opt in. An Austrian economist would say that this behavior simply reveals that most people don’t want to research the pros and cons of the boxes they check on forms; perhaps they value their time too much and have found that the default is usually decent enough. Even if they express different preferences in surveys, their “true” preferences are revealed by their behavior. Thus, most human behavior is rational. Even though this is a tautology, it’s useful for understanding human behavior, because it focuses on incentives and preferences rather than on nebulous attempts to pin down the platonic ideal of rationality.
A figure often used by libertarian economists to explain problems with economic incentives is Milton Friedman’s “four ways to spend money” matrix: whose money is being spent (your own or someone else’s) crossed with whom it is spent on (yourself or someone else). The incentives of the state are represented by the red box: spending someone else’s money on someone else, with weak incentives to control either cost or quality. Though governments do some good, they also commit some of the largest and most blatant misappropriations of resources, including policies such as the drug war that are downright harmful. The charitable donations of the average person would be in the top right corner: spending one’s own money on someone else. This characterization seems accurate, given that charitable donation is relatively stingy (2% of GDP) but largely ineffective: the most popular charitable causes are education (much of this is wealthy universities catering to wealthy students) and religion, which together account for 45% of charitable giving.
However, in the strictest sense, scientific research and high-impact charities like the Gates Foundation would fall into the red box as well, and I’ve been arguing that there’s more efficiency in these areas than one might expect. Thus, I’d characterize the matrix as a spectrum rather than a binary. My lab is not exactly spending our own money, and we’re not exactly spending it on ourselves. But we spend a lot of time thinking about how to spend the limited money we receive, and we have deep domain-specific knowledge of what we’re spending it on. Because we identify the money and the research so strongly with ourselves, we’re much closer to the top left corner than the U.S. president is when he creates a budget.
As I mentioned above, saturation, feedback, and system 2 thinking promote high-quality scientific research. How do charities and governments compare on these dimensions? The market for charities is fairly saturated, and there are several organizations that do high-quality research, engage in system 2 thinking, and provide feedback. On the other hand, charitable donations from ordinary people are not subject to system 2 thinking and appropriate feedback. An analogous situation exists in government policy: a saturated field of think tanks and wonks engages in high-quality analytical thinking and provides feedback on policy proposals. However, the world of governments is not saturated, and individual governments face little competition. The feedback that governments receive is often perverse: citizens want lower taxes and higher government spending. System 2 thinking is notoriously lacking in public policy.
Conclusions
Replaceability is a problem in almost every aspect of EA (though some EAs may see it as a weight off their shoulders). I feel slightly more favorably toward earning to give than I did previously, but I’m concerned about the lack of good giving opportunities. Direct EA work seems to be a good option, but the mix should tilt much more toward advocacy than research; EA organizations are probably reinventing the wheel with their research. Nitpickers will point out that advocacy itself is saturated and replaceable, and this is probably true to an extent. Instead of advocating for specific policies, it may be better to focus on creating systems with favorable values and incentives. Compared to academic discussion of fallacies, an understanding of incentives can provide more insight into why systems fail. EA could benefit from the perspectives of libertarian economists.
Put another way: comparative advantage is hard. It is slightly easier in the charity world because the charity world is less competitive overall, but it is still hard. EA is relatively new and thus has found some underfunded areas, but we should expect the orgs that do well by current EA heuristics to eventually become saturated with funds because there will be a lot of people using EA heuristics to inform their giving. The truly best giving opportunities are expected to be at the frontier: causes where the skills needed to evaluate them are rare. This could be due to technical background, professional connections, etc.
I would propose that one thing that at least helps alleviate this is EAs pursuing higher-variance causes. Low-variance causes get snapped up pretty quickly, but there is remarkably little stomach among large donors for funding 20 causes and having 19 of them fail, even if the one that succeeds “pays for” the entire experiment in terms of impact (see the toy calculation below).
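As a toy illustration, with entirely made-up numbers: a portfolio of twenty $1M grants in which nineteen fail and the one hit returns $100M of impact still delivers more impact per dollar than a safe grant that reliably doubles its cost.

```python
# Toy hits-based-giving arithmetic. Every number below is made up for
# illustration; nothing here comes from actual grant data.
n_causes = 20
cost_per_cause = 1_000_000           # $1M per grant
p_success = 1 / n_causes             # 19 of 20 fail
payoff_if_success = 100_000_000      # the one hit "pays for" the experiment

portfolio_cost = n_causes * cost_per_cause                 # $20M
portfolio_ev = n_causes * p_success * payoff_if_success    # one expected hit: $100M

safe_impact_per_dollar = 2.0         # a reliable grant returning 2x its cost in impact
print(f"hits-based impact per dollar: {portfolio_ev / portfolio_cost:.1f}")  # 5.0
print(f"safe-grant impact per dollar: {safe_impact_per_dollar:.1f}")         # 2.0
```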
Cause selection should help here too. If you find a cause that others don’t think clearly about, like Open Borders, or identify an area that is neglected because of its newness, such as synthetic biology biosecurity, then you’re more likely to have a comparative advantage.
I’d argue that even within these areas, it would be hard to find low-hanging research fruit. It seems to me that even new technologies get saturated really quickly. In the bioinformatics research I mentioned, I was doing a fairly uncommon type of data analysis to answer a non-obvious question about a six-year-old technology, and I still got scooped. My experience with the social sciences is more limited, but I once tried to write a paper exploring the contribution of increasing agricultural productivity to Thai GDP, only to find that a paper on that exact topic already existed.
So one of my major points is: EA organizations such as GiveWell may do too much original research and writing, when advocacy backed by literature reviews might be better.
If you want to find a comparative advantage in research, maybe breakthrough research, translational research (including efforts at commercialisation), and literature reviews are more neglected?
If you want to take a hard line against the usefulness of research among moderately high-talent researchers, then you might argue that it’s better to leave academia for business or policy. Would you try to make that case? The thing is that we’re looking for a comparative advantage; we don’t necessarily need an absolute one (e.g. being in the top X% at task Y).
I’m not sure it’s right to criticise GiveWell on this sort of topic, when GiveWell mostly focus on reviewing, summarising, integrating and applying existing literature, rather than creating new experiments and models as academics would.
Yeah, I’ll concede that GiveWell is fairly good about not reinventing the wheel, though I think they could be taking even more shortcuts.
It’s not that I think research is useless. I actually take a very positive view of scientific research as a whole, but this means that an individual is constrained to small marginal returns. Maybe we’re just going to have to accept that, instead of saying “scientific research is fundamentally broken and I’m gonna come in and change everything” the way I used to think. I still think scientific research is a good EA option, though I don’t think translational research is systematically underfunded: there are many incentives to pursue biomedical applications, and in fact some people argue that basic research is underfunded compared to applied research. I’m not sure what you mean by “breakthrough research”.
If one does want to make a more radical difference, one needs to identify systems of incentives that do result in brokenness, like government spending as a whole.
By breakthrough research, I mean this kind of thing: http://blog.givewell.org/2015/04/14/breakthrough-fundamental-science/.
Incentives causing brokenness: we’re going well beyond the subject of your post now, but science still seems to be broken in many ways. As you say, people are incentivised badly: to publish, rather than for the social good. People are incentivised to fund projects with sure incremental progress, much of which is not on the biggest problems facing humanity. Fields are fairly insular and inward-looking. It takes too long for people to recognise paradigm shifts. Replication is poor. Science infrastructure is neglected (e.g. LaTeX is old and has lots of room for improvement). Things like cognitive genomics are neglected for political reasons. So there are a lot of different problems here, although they’re not obviously fixable by an individual motivated researcher. I guess we agree that a person in science might be well served to try to combat these head-on, rather than just performing research that might be replaceable. I don’t agree that one has to zoom out all the way to government spending to see problems to be fixed. That would seem to be an overcorrection.
The most prestigious publications like Nature and Science love to publish breakthroughs, but this also leads to sloppiness, like the paper on arsenic-based life (subsequently refuted) and the paper on stimulus-induced (STAP) stem cells (subsequently retracted). When we see really bold publications like that, we always ask, “How long do you think till this is retracted?” On the other hand, there are also incentives to produce research with social benefit. It’s odd to complain that both breakthrough (i.e. basic, high-risk/high-reward) research and translational (i.e. applied, incremental) research are underfunded. You can’t have it both ways here.
There’s a similar contradiction in this: “It takes too long for people to recognise paradigm-shifts. Replication is poor.” The reason science is slow to change theories is that replication is poor: individual studies have inherent stochasticity, so one has to consider a body of work as a whole before being willing to shift the paradigm. (Note that Galileo’s and Mendel’s results didn’t replicate initially either.)
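To put rough numbers on that stochasticity (a minimal simulation with illustrative settings of my own choosing, not figures from this discussion): for a real effect of half a standard deviation studied with 30 subjects per arm, a single study reaches significance only about half the time, so two honest studies of the same true effect will frequently appear to disagree.

```python
# Simulation: even a perfectly real effect often fails to "replicate" when
# individual studies are modestly powered. All settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm, true_effect, n_sims = 30, 0.5, 10_000   # effect size in SD units

n_significant = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    n_significant += p < 0.05

power = n_significant / n_sims
print(f"chance a single study 'works': {power:.2f}")                   # ~0.48
print(f"chance two independent studies both 'work': {power ** 2:.2f}")  # ~0.23
```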
What I think is neglected is ambitious science with a low probability of success (e.g. Hsu’s cognitive genomics work, Ioannidis’ statistics work), and work bridging new research to humanitarian applications (e.g. using machine learning to classify medical images, or to detect online fraud or security risks). These are overlapping sets.
By paradigm shifts, I mean adaptation to different ways of doing things, e.g. working to develop BCI and brain implants, or to apply deep learning within machine learning. These things have occurred too slowly. The replication problems are mostly in soft sciences like psychology, and arise from systemic problems such as the lack of study pre-registration. The causes of these problems are somewhat entangled, but they’re not the exact same problem. Both should be fixed.
My point is: there are a range of important structural changes that effective altruists might want to make in science.
I think the main comparative advantage (= irreplaceability) of the typical EA comes not from superior technical skill but from motivation to improve the world (rather than make money, advance one’s career or feel happy). This means researching questions which are ethically important but not grant-sexy, donating to charities which are high-impact but don’t yield a lot of warm-fuzzies, promoting policy which goes against tribal canon etc.
Sometimes it’s not about promoting policy which goes against tribal canon. It can also be about promoting policy so technical and obtuse that virtually everyone else’s eyes glaze over when thinking about it, so they never pay it any mind.
The recent talk of not having good giving opportunities is confusing to me. Maybe there are limited opportunities to save a life at $3,000, but there should be many opportunities to save a life at $10,000 (UNICEF? Oxfam?). This is still far better than developed-country charities (where it costs millions of dollars to save a life), so it is still a great opportunity. As for global catastrophic risk, it is true that some areas receive a lot of funding. But then there is the Global Catastrophic Risk Institute, whose integrated assessment looks across all the global catastrophic risks to prioritize interventions, and it is largely unfunded (disclosure: I am a research associate at the Global Catastrophic Risk Institute). And in general, if we took the value of future generations seriously, we would be funding global catastrophic risk reduction orders of magnitude more than we currently do.
I completely agree.
+1
The idea that we could become saturated with money seems bizarre. It’s not like GiveWell’s top charities have run out of RfMF (room for more funding), and even if they do, and all the top organisations are genuinely more talent- than money-constrained, it doesn’t follow that there’s a better option than putting money towards them.
You could potentially fund scholarships for people learning the skills they need. Not that I would expect this to be a top-tier use of the money, but it seems likely to be as good a use of your resources or better than either a) applying or studying for a highly ‘talent-constrained’ job which, if it’s that hard to fill, there’s little reason to expect yourself to be competent for, or b) sitting around and waiting for someone to show up.
In the future, effective altruism may need to become less risk-averse. There is appeal in GiveWell’s top recommendations for classic charitable causes because there is a more-or-less guaranteed benefit as cash transfers, mosquito bed nets, or vaccinations are delivered. If effective altruism donates millions of dollars to an advocacy campaign which may fail, or funds a project with a decades-long trajectory that is difficult to predict in the present, it’s less certain someone in need will be helped as much as we hoped. There may also be a personal bias among individuals who want to feel as though the dollars they donated definitely made a difference, regardless of what mistakes other donors make. That could be a bias towards donating to the Against Malaria Foundation or GiveDirectly. I know this worry has pulled on my heartstrings in the past, but I’m (slowly) overcoming it. I know of no data on how pervasive this bias may actually be among altruists.
One solution could be to separate ego and psychology from expected value. Part of effective altruism’s appeal is that it can make one feel good about oneself: confidence that donating to an effective charity will definitely lead to lives saved feels better than ambivalence about the value of private charity and its uncertainty. However, that’s a state of thinking effective altruism may need to move beyond. Plenty of wealthy philanthropists donate money to art museums and other institutions which won’t go on to save lives; these philanthropists still reap the status of donating and feel good about themselves. There are activists and protest movements around the world proud of the work they do, but not all of that work is guaranteed to succeed. If others can feel that way, then I think effective altruism can take on bigger risks with a chance of greater value as well, without us feeling bad about ourselves.
Another solution may be effective altruism scaling up its approach and thinking bigger about what it can achieve. Good Ventures is more or less aligned with effective altruism, and other major philanthropists able to donate as much as the rest of this movement combined are doing related work (e.g., Elon Musk). William MacAskill’s new book Doing Good Better was recently positively reviewed by Susan Desmond-Hellmann, the CEO of the Bill and Melinda Gates Foundation. Further, catastrophic risks like artificial intelligence and pandemic biotechnology now receive, or may soon receive, funding from the National Science Foundation and other governmental bodies in the United States.
These are all indicators that even if effective altruism doesn’t hit explosive growth in the next couple of years, it still has a chance of affecting the biggest movers and donors in philanthropy. This doesn’t make the responsibility of individual donors to save lives by way of UNICEF or Oxfam much less imperative. But effective altruism may need to change its pitch: being an individual who saves hundreds of lives through earning to give or regular donations might need to give way to pursuing a more ambitious and less conventional career as part of a more coordinated global network of actors whose greatest value is in doing work beyond the scope of individuals.
To find the best giving opportunities might require effective altruism pioneering new ways of finding them, or creating them itself.
What is good for the world is not necessarily good for our self-image as heroic do-gooders. It would be good for the world if there were lots of money already being directed to the most important things. It would be good for the world if there were enough folks with money and good intentions to ‘fill up’ any funding gaps promptly after they arise. It would be good for the world if the things that really matter already attracted the intellectual and creative energies of large numbers of extremely talented people.
Yet, as you say, the closer this is to how the world really is, the more it squeezes out opportunities for individual acts of heroism. Things have gone badly wrong if vast stakes hang in the balance of my behaviour or my bank account; the margin of ‘vitally important things for thousands of lives/the human species/everything that matters’ should be really well populated, and my impact should be fairly marginal.
(Doctors sometimes pompously remark that medicine is the only profession that works towards its own obsolescence. And there definitely is a common medical Bildungsroman of how a JD-like idealist wanting to save the world gets marinated in blood, sweat, and tears into a grizzled-but-heart-in-the-right-place Perry Cox figure. Doctors I know dislike medical heroism not only because it can twist judgement, and not only because it can be an exercise in demeaning vaingloriousness, but also because good medicine should make heroism unnecessary: the system shouldn’t require extraordinarily demanding efforts from medics to ‘save the day’, else saving the day would be extraordinarily rare.)
Speaking for myself, insofar as optimism about the state of the world is in tension with optimism about the likely impact I could make, I find the former much more psychologically comforting than the latter. It would be a damning indictment of the human condition if I really were one of the best candidates to save the world.
I would also much prefer if altruism was obsolete. You could watch your hero stories on TV and be done with it. :)
I see doctors more as organized rent-seekers who peddle artificial scarcity. Not only do they not invent the drugs they prescribe; they earn money because we need their permission before we can buy a prescription drug from the pharma industry. We can’t even sign a legal waiver to reject this paternalism.
Many times I have brought money into a doctor’s practice only to fetch the piece of paper I needed before other people would sell me what I actually wanted to buy.
Off topic: As a member of the aforementioned rent-seeking organisation, I might be biased, but I’m fairly in favour of regulating access to medical substances via prescriptions or similar. The value a doctor (hopefully) provides when prescribing is that they will select the right medication, and I’d back them to get this right significantly more often than (even well-informed) laypeople. Maybe a libertarian would be happy with legal waivers etc. to get access to medical substances (let an individual decide the risks that are tolerable, and let the market set the price on how much value expert prescribing adds!), but most folks might be happy for some paternalism to protect people from the dangers, even if it means savvy, educated laypeople suffer some costs. Besides, even if you could waive away your own damages, most countries have some degree of socialized medicine whose services are obliged to treat you, and the costs to health services of medication errors are already significant. I’d guess letting amateurs have a go would increase this still further.
Yeah, it could make sense to move lots of medications down a notch on the restriction scale based on practical libertarian arguments but doing away with prescriptions altogether seems very net harmful.
Surely anyone save an absolutist non-EA (non-consequentialist) libertarian would grant that; but equally, surely it does make sense to move lots of medications down a notch on the restriction scale. See Slate Star Codex on the FDA, 23andMe, meds with tiny chances of huge harms compared to antidepressants with high chances of libido reduction, etc.
Put yet another way: overcrowdedness is a significant concern. Perhaps you assign it a higher weighting within the ‘overcrowdedness/importance/tractability’ tripartite than the average EA does. If so, why not trade it off for the latter two: you could examine only moderately important careers (ones that receive little or no EA attention) where the average employee is much less talented than you. Or you could dedicate yourself to solving a seemingly intractable problem; it’s high risk, but that’s precisely why it might be overlooked, as Romeo points out.
Of course, if you think replaceability issues are truly ubiquitous, then even these suggestions are moot.
Greg conveys similar feelings about experiencing a downwards correction in his estimated effectiveness: http://effective-altruism.com/ea/iy/lognormal_lamentations/
Thanks for the article. I think the conclusions about advocacy are important. Looked at another way, there are a lot of obviously very bad things about the world and a lot that could be done differently. The problem is that people don’t necessarily want to do things differently because of the incentives they face. The question is who holds the reins of power in any given situation, and how you can beat them without too much collateral damage.
FYI, I don’t think that you should credit ‘libertarian’ or ‘Austrian’ fringe political economists with thinking about incentives, homo economicus, or revealed preference; they’re all standard, traditional economics.