How can you compare helping two different people in different ways?

When people ask what aspiring effective altruists work on, I often start by saying that we do research into how you can help others the most. For example, GiveWell has found that distributing some 600 bed nets, at a cost of $3,000, can prevent one infant from dying of malaria. They have also found that, for the same price, you could deliver 6,000 deworming treatments, each of which works for around a year.
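
To make those figures concrete, here is the back-of-the-envelope arithmetic behind them, written out as a small Python sketch. It is only the unit-cost calculation implied by the numbers quoted above, not GiveWell's own cost-effectiveness model:

```python
# Rough unit costs implied by the GiveWell figures quoted above.
# Illustrative back-of-the-envelope arithmetic only, not GiveWell's own model.

total_cost = 3000            # dollars
nets = 600                   # bed nets bought for that amount
deaths_prevented = 1         # infant deaths prevented by those nets
deworming_treatments = 6000  # deworming treatments bought for the same amount

cost_per_net = total_cost / nets                        # $5.00 per net
cost_per_treatment = total_cost / deworming_treatments  # $0.50 per treatment
chance_net_saves_life = deaths_prevented / nets         # 1 in 600 per net

print(f"${cost_per_net:.2f} per net, ${cost_per_treatment:.2f} per deworming treatment")
print(f"Each net has roughly a 1 in {nets} chance of preventing an infant death")
```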

A common question at this point is ‘how can you compare the value of helping these different people in these different ways?’ Even if the numbers are accurate, how could anyone determine which of these two possible donations helps others the most?

I can’t offer a philosophically rigorous answer here, but I can tell you how I personally approach this puzzle. I ask myself the question:

  • Which would I prefer, if, after making the decision, I were equally likely to become any one of the people affected, and experience their lives as they would? [1]

Let’s work through this example. First, we’ll scale the numbers down to something manageable: for $5, I could offer 10 children deworming treatments, or alternatively offer 1 child a bed net, which has a 1 in 600 chance of saving their life. To make this decision, I should compare three options:

  1. I don’t donate, and so none of the 11 children receive any help

  2. Ten of the children receive deworming treatment, but the other one goes without a bed net

  3. The one child receives a bed net, but the other ten go without deworming

If I didn’t know which of these 11 children I was about to become, which choice would be more appealing? Obviously options 2 and 3 are better than option 1 (no help), but deciding between 2 and 3 is not so simple. I am confident that a malaria net is more helpful than a deworming tablet, but is it ten times more useful?
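
To show how I actually run that comparison, here is a minimal sketch of the expected-value calculation behind the veil-of-ignorance question, in Python. The ‘wellbeing’ numbers are placeholders I have made up purely to illustrate the mechanics; they are not GiveWell’s estimates or anyone else’s:

```python
# Veil-of-ignorance comparison for the $5 decision above:
# I am equally likely (1 in 11) to become any of the 11 affected children.
# The benefit numbers are invented placeholders, used only to show the mechanics.

N_CHILDREN = 11
P_NET_SAVES_LIFE = 1 / 600          # from the GiveWell figures quoted earlier

VALUE_OF_LIFE_SAVED = 10_000        # assumed benefit of a death averted (arbitrary units)
VALUE_OF_DEWORMING = 10             # assumed benefit of a year of deworming (same units)

# Option 2: ten children are dewormed, the eleventh gets nothing.
ev_option_2 = (10 * VALUE_OF_DEWORMING + 1 * 0) / N_CHILDREN

# Option 3: one child gets a bed net (a 1-in-600 chance of a life saved), the other ten get nothing.
ev_option_3 = (1 * P_NET_SAVES_LIFE * VALUE_OF_LIFE_SAVED + 10 * 0) / N_CHILDREN

print(f"Expected benefit per child, option 2 (deworming): {ev_option_2:.2f}")
print(f"Expected benefit per child, option 3 (bed net):   {ev_option_3:.2f}")
```

With these particular placeholder numbers the deworming option comes out ahead, but a different guess about how bad an infant death is relative to a year of parasitic infection could easily flip the result; the value of the exercise is that it forces those guesses into the open.
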
This question has the virtue of:
  • Being ‘fair’, because in theory everyone’s interests are given ‘equal consideration’

  • Putting the focus on how much the recipients value the help, rather than how you feel about it as a donor

  • Motivating you to actually try to figure out the answer, by putting you in the shoes of the people you are trying to help.

You’ll notice that this approach looks a lot like the veil of ignorance, a popular method among moral philosophers for determining whether a process or outcome is ‘just’. It should also be very appealing to any consequentialist who cares about ‘wellbeing’ and thinks everyone’s interests ought to be weighed equally. [2] It also looks very much like the ancient instruction to “love your neighbor as yourself”.

In my experience, this thought experiment pushes you towards asking good concrete questions like:
  • How much would deworming improve my quality of life immediately, and then in the long term?

  • How bad is it for an infant to die? How painful is it to suffer from a case of malaria?

  • What risk of death might I be willing to tolerate to get the long-term health and income gains offered by deworming?

  • And so on.

I find the main weakness of applying this approach is that thousands of people might be affected in some way by a decision. For instance, we should not only consider the harm to young children who die of preventable diseases, but also the grief and hardship experienced by their families as a result. But that’s just the start: health treatments delivered today will change the rate of economic development in a country, and therefore the quality of life of all future generations. A big part of the case for deworming is that it improves nutrition, and thereby raises education levels and incomes for people when they are adults; these benefits are then passed on to their children and their children’s children.

This doesn’t make it the wrong question to ask; rather, tracking and weighing the impact on the hundreds or thousands of people who might be affected by an action is beyond what most of us can do in a casual way. However, I find you can still make useful progress by thinking through and adding up the impacts on paper, or in a spreadsheet. [3] When you apply this approach, it is usually possible to narrow down your choices to just a few options, though in my experience you may then not have enough information to confidently decide among that remaining handful.
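
For what it’s worth, here is a toy version of that ‘add up the impacts’ exercise in Python. The groups of people and the numbers attached to them are invented placeholders; a real spreadsheet would have far more rows and far better-researched figures:

```python
# A toy version of the 'add up the impacts on paper or in a spreadsheet' exercise.
# Every group and number below is a placeholder invented for illustration only.

# (group of people affected, how many of them, assumed benefit per person per $1,000 donated)
impacts_option_a = [
    ("children directly treated",          400, 2.0),
    ("family members (less grief, care)",  800, 0.3),
    ("future generations (via incomes)",  2000, 0.1),
]

impacts_option_b = [
    ("children directly treated",           50, 8.0),
    ("family members (less grief, care)",  100, 1.0),
    ("future generations (via incomes)",   500, 0.2),
]

def total_benefit(impacts):
    """Sum the assumed benefit across every affected group."""
    return sum(count * benefit_per_person for _, count, benefit_per_person in impacts)

print("Option A total (arbitrary units):", total_benefit(impacts_option_a))
print("Option B total (arbitrary units):", total_benefit(impacts_option_b))
```

Even a crude tally like this makes it clear which assumptions the comparison actually hinges on.
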
--

[1] A very similar, probably equivalent, question is: Which would I prefer if, after making the decision, I then had to sequentially experience the remaining lives of everyone affected by both options?

[2] One weakness is that this question is ambiguous about how to deal with interventions that change who exists (for instance, policies that raise or lower birth rates). If you assume that you must become someone, so that non-existence is not an option, you would end up adopting the ‘average view’, which actually has no supporters in moral philosophy. If you simply ignored anyone whose existence was conditional on your decision, you would be adopting the ‘person-affecting view’, which itself has serious problems. If you do include those people in the population of people you could become, and add ‘non-existence’ as the alternative for the choices which cause those people not to exist, you would be adopting the ‘total view’.

[3] Alternatively, if you were convinced that these long-term prosperity effects were the most important impact, and that they were similarly valuable across countries, you could try to estimate the increase in the rate of economic growth per $100 invested in different projects, and just seek to maximise that.