Cooperative or Competitive Altruism, and Antisocial Counterfactuals

“We don’t usually think of achievements in terms of what would have happened otherwise, but we should. What matters is not who does good but whether good is done; and the measure of how much good you achieve is the difference between what happens as a result of your actions and what would have happened anyway.”—William MacAskill, Doing Good Better

Counterfactual reasoning is fundamental to economics, and this approach has been incorporated into Effective Altruism. The actual impact of your choices is based on what changed, not what happened. Per the forum wiki summary, “Counterfactual reasoning involves scenarios that will occur if an agent chooses a certain action, or that would have occurred if an agent had chosen an action they did not.”

In this post, I’ll argue that the way counterfactual reasoning is applied in practice to Effective Altruist decisions and funding creates a preventable anti-cooperative bias, and that this bias is making us as a movement less impactful than we could be.

Myopia and Hyperopia

To consider this, we need to revisit a fundamental assumption about how to think about impact, in light of the fact that individuals and groups can, should, and often do cooperate, both explicitly and implicitly. To motivate revisiting that assumption, I want to note two failure modes for counterfactual reasoning.

First, myopic reasoning. Being a doctor saves lives, but arguably has little counterfactual value. And myopic reasoning can be far worse than this. As Benjamin Todd suggests imagining, “you’re at the scene of an accident and you see an injured person. In your enthusiasm to help, you push the paramedics out of the way and perform CPR on the injured person yourself. You’re successful and bring them back to consciousness, but because you’re less well-trained than the paramedics, you cause permanent damage to the person’s spine.” Obviously, counterfactual thinking, appreciating that without your “help” the outcome would have been better, can prevent this type of myopia.

But there is a second failure mode, which I’ll call hyperopic reasoning. Myopia is nearsightedness, the inability to see things far away, and hyperopia is farsightedness, the inability to see things nearby. What does this look like in reasoning about outcomes, and how could it fail?

In December 2020, the United Kingdom and Sweden granted emergency approval for the Oxford–AstraZeneca vaccine, while Germany and the United States authorized the Pfizer-BioNTech vaccine, and the United States authorized the Moderna vaccine as well. While all of these were significantly better than the earliest vaccine, from Sinopharm in China, or the slightly later CoronaVac or Sputnik V, the counterfactual value of each vaccine seems comparatively minuscule, because there were two other Western vaccines that would have been approved without the development of the third option[1].

The Incidental Economist puts it clearly:

When you want to know the causal effect of an intervention (policy change, medical treatment, whatever) on something, you need to compare two states of the world: the world in which the intervention occurred and the world in which it did not. The latter is the counterfactual world… What we really want to know is how the world is different due to the intervention and only the intervention.

This is viewing the counterfactual from a single viewpoint. There is no shared credit for AstraZeneca, Pfizer, and Moderna, because each counterfactual ignores the nearby efforts and focuses only on the global outcome. But we certainly don’t think that the value was zero, so what are we doing wrong?

We can try to resolve this without accounting for cooperation. Prior to the creation of the vaccines, it was unclear which would succeed. Each team had some chance of creating a vaccine, and some chance that the others would fail. If each team has an uncorrelated 50% chance of succeeding, then with three teams, the probability of being the sole group with a vaccine (succeeding while the other two fail, 0.5 × 0.5 × 0.5) is 12.5%, justifying the investment. But that means the combined counterfactual value of all three was only 37.5% of the value of success, even though together the three teams delivered an 87.5% chance of at least one vaccine; if we invested on that basis, we would underestimate the value of each. In this case, the value of a vaccine was plausibly in the hundreds of billions of dollars, so 12.5% of it is enough, but it’s still wrong as a general rule.
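To make the arithmetic concrete, here is a minimal sketch in Python of the toy model above. The three teams and the independent 50% success chances are the illustrative assumptions from the text, not real data:

```python
# Toy model from the text: three vaccine teams, each with an independent
# 50% chance of success. Illustrative numbers, not real data.
p_success = 0.5
n_teams = 3

# Naive counterfactual credit for one team: it succeeds and all others fail.
counterfactual_per_team = p_success * (1 - p_success) ** (n_teams - 1)

# Total credit handed out if every team is scored this way.
total_counterfactual_credit = n_teams * counterfactual_per_team

# Actual probability the world ends up with at least one vaccine.
p_any_vaccine = 1 - (1 - p_success) ** n_teams

print(f"Per-team counterfactual credit: {counterfactual_per_team:.1%}")    # 12.5%
print(f"Summed counterfactual credit:   {total_counterfactual_credit:.1%}")  # 37.5%
print(f"Chance of at least one vaccine: {p_any_vaccine:.1%}")              # 87.5%
```

The summed counterfactual credit (37.5%) falls well short of the value actually created (an 87.5% chance of a vaccine), which is exactly the hyperopic undercounting described above.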

Cooperation Within Effective Altruism, and Without

I think this problem is partially dealt with, within the Effective Altruism movement, by viewing the movement as a single actor[2]. Within Effective Altruism, people are remarkably cooperative, willing to forgo credit and avoid duplicating effort. If we’re collectively doing two projects with similar goals, each of which may fail, we can view both as part of a single strategy for ensuring the goals are achieved.

At the same time, there is value in competition and in multiple approaches to the same question, so some degree of internal competition is good. It’s certainly useful that there is more than one funder, so that decisions are made independently, despite the costs imposed. It’s probably good that there are two organizations doing career advising, instead of just having 80,000 Hours.

In business, this approach, with multiple firms engaged in related markets, could be called coopetition: cooperation on some things and competition on others. But the coopetition approach is imperfect. There is a natural tendency for people to seek credit for their ideas, and that often leads to suboptimal outcomes. (Which we can compensate for, at least partially.) Similarly, there is a cost to coordination, so even when people would like to cooperate, they may not find ways to do so. (But providing a list of ideas for others to pursue and helping launch projects designed to fill needs is useful.)

Where Effective Altruism more clearly falls down is external cooperation. Much of what Effective Altruism does is externally focused. (Biosecurity, alternative proteins, global poverty reduction, and similar. Everything but community building, essentially.) Unfortunately, most of the world isn’t allied with Effective Altruism, or is only implicitly cooperating. But most of the world at least generally shares its goals, if not its priorities. And that is unlikely to change in the near term.

So who should get credit for what gets done? This matters: if we’re not careful with our counterfactual reasoning, we could overcount or undercount our contribution, and make suboptimal decisions.

Shapley Value

Lloyd S. Shapley gave us a fundamental tool for reasoning about cooperation: the Shapley value, which is the (indisputably correct[3]) way to think about counterfactual value in scenarios with multiple cooperating actors. If you don’t know what it is, read or skim Nuño Sempere’s post, and come back afterwards. But in short, in the same way that considering counterfactuals fixes myopia by considering what happens if you do nothing, Shapley values fix hyperopia by giving credit to everyone involved in cooperatively providing value.
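For the vaccine example, the calculation is simple enough to sketch. This is a minimal illustration, assuming the same toy model as before: three symmetric teams with independent 50% success chances, where a coalition’s value is the chance that at least one member succeeds. The team names are just labels:

```python
from itertools import permutations

teams = ["AstraZeneca", "Pfizer", "Moderna"]

def v(coalition):
    """Value of a coalition: the chance at least one member succeeds,
    assuming independent 50% success chances (illustrative numbers)."""
    return 1 - 0.5 ** len(coalition)

def shapley_values(players, value_fn):
    """Average each player's marginal contribution over all join orders."""
    credit = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for player in order:
            before = value_fn(coalition)
            coalition.append(player)
            credit[player] += value_fn(coalition) - before
    return {p: total / len(orders) for p, total in credit.items()}

for team, share in shapley_values(teams, v).items():
    print(f"{team}: {share:.1%}")  # 29.2% each
```

Each team gets 87.5% / 3 ≈ 29.2% of the value of a vaccine: more than the naive 12.5% counterfactual, and the three credits sum to the full value created, so nothing is left unattributed.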

And a large part of the value provided by Effective Altruism is only cooperative. We donate bed-nets, which get distributed by local partners, on the basis of data about malaria rates collected by health officials, while someone else pays the costs of distribution, to individuals who need to do all the work of using the bed-nets. Then we walk away saying (hyperopically) that we saved a life for $5,000, ignoring every other part of the complex system enabling our donation to be effective. That is not to say it’s not an effective use of money! In fact, it’s incredibly effective, even in Shapley-value terms. But we’re over-allocating credit to ourselves.

The implicit question is whether we view the rest of the world as moral actors or as moral patients; the latter is the direct cause of (the unfortunately named) White Savior Complex. This is the view that altruists’ decisions matter because they are the people responsible, and those they save are simply passive recipients. And (outside of certain types of animal welfare) this view is clearly both wrong and harmful.

“Non-EA Money”

I’ve used this phrase before, and heard others use it as well. The idea is that we’d generally prefer someone else spend money on something, because that leaves more money for us, counterfactually, to do good. But this is an anti-cooperative attitude; when we have goals like pandemic prevention which are widely shared, the aim should be to maximize the good done, not the good that Effective Altruism does, or claims credit for. Sometimes, that means cooperating on the use of funds, rather than maximizing the counterfactual impact of our donation.

Does this matter? I think so, and not just in terms of potentially misallocating our funds across interventions. (Which by itself should worry us!) It also matters because it makes our decisions anti-cooperative: we’re trying to leverage other spenders into spending money, rather than cooperating to maximize the total good done.

For example, GiveWell recently looked at the likelihood of crowding out funding, with an explicitly adversarial model; if the Global Fund and/or PMI are providing less funding because of GiveWell’s donations, GiveWell doesn’t view its own funding as counterfactual. And when we only take credit for counterfactual impact instead of Shapley impact, we’re being hyperopic.
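As a toy illustration of why the adversarial view is hyperopic (hypothetical numbers and funder labels, not GiveWell’s actual model): suppose a program can be fully funded by either of two funders, so each one’s purely counterfactual impact is zero whenever the other would have stepped in. A sketch, reusing the Shapley computation from above:

```python
from itertools import permutations

funders = ["GiveWell-directed donors", "Global Fund"]

def v(coalition):
    """The program happens (value 1.0) if at least one funder is present."""
    return 1.0 if coalition else 0.0

def shapley_values(players, value_fn):
    """Average each player's marginal contribution over all join orders."""
    credit = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for player in order:
            before = value_fn(coalition)
            coalition.append(player)
            credit[player] += value_fn(coalition) - before
    return {p: total / len(orders) for p, total in credit.items()}

# Adversarial counterfactual: remove one funder, the other covers the program.
counterfactual = {p: v(funders) - v([q for q in funders if q != p]) for p in funders}
print(counterfactual)              # both 0.0: nobody gets any credit
print(shapley_values(funders, v))  # 0.5 each: credit sums to the value created
```

Under purely counterfactual accounting, neither funder gets any credit for a program that definitely gets funded; Shapley accounting splits the credit so that it sums to the value actually created.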

I’m unsure if there is a simple solution to this, since Shapley values require understanding not just your own strategy, but the strategies of others, which is information we don’t have. I do think that it needs more explicit consideration, because as EA becomes more globally important, and coordination with other groups becomes increasingly valuable, playing the current implicitly anti-cooperative strategy is going to be a very bad way to improve the world.

  1. ^

    This example was inspired by Nuño Sempere’s example of Leibniz and Newton both inventing calculus, which is arguably a far better example, because with calculus there was no analogue of the advantage of having additional vaccine manufacturing capacity.

  2. ^

    Despite the fact that it really (really, really) isn’t just one thing. Cooperation thankfully triumphs over even a remarkable degree of incoherence.

  3. ^

    It is the uniquely determined way to allocate value across cooperating actors, in a mathematical sense, given prior decisions about how credit should be apportioned: whether to allocate credit to groups or to individuals, and whether to split by group, by time spent, or by some other metric.