How can you compare helping two different people in different ways?
A common question at this point is ‘how can you compare the value of helping these different people in these different ways?’ Even if the numbers are accurate, how could anyone determine which of these two possible donations helps others the most?
I can’t offer a philosophically rigorous answer here, but I can tell you how I personally approach this puzzle. I ask myself the question:
Which would I prefer, if, after making the decision, I were equally likely to become any one of the people affected, and experience their lives as they would? [1]
Let’s work through this example. First, we’ll reduce it to a manageable scale: for $5, I could offer 10 children deworming treatments, or alternatively offer 1 child a bed-net, which has a 1 in 600 chance of saving their life. To make this decision, I should compare three options:
1) I don’t donate, and so none of the 11 children receive any help
2) Ten of the children receive deworming treatment, but the other one goes without a bed-net
3) The one child receives a bed-net, but the other ten go without deworming
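A toy calculation can make the comparison concrete: under the veil, I am equally likely to become any one of the 11 children, so I would pick the option with the highest average utility across all of them. In the sketch below, only the 1-in-600 figure comes from the text; every utility number is a made-up placeholder, not an estimate.

```python
# Toy veil-of-ignorance calculation for the three options above.
# All utility values are illustrative placeholders, not real estimates.

N_DEWORM = 10          # children who could receive deworming
N_BEDNET = 1           # child who could receive a bed-net
P_SAVE = 1 / 600       # chance the bed-net saves the child's life (from the text)

U_DEWORMED = 1.0       # placeholder utility gain from deworming
U_LIFE_SAVED = 400.0   # placeholder utility gain from a life saved

def expected_avg_utility(dewormed: int, bednets: int) -> float:
    """Average expected utility over all 11 children, as if I were
    equally likely to become any one of them."""
    total = dewormed * U_DEWORMED + bednets * P_SAVE * U_LIFE_SAVED
    return total / (N_DEWORM + N_BEDNET)

options = {
    "1) donate nothing": expected_avg_utility(0, 0),
    "2) deworm ten children": expected_avg_utility(N_DEWORM, 0),
    "3) one bed-net": expected_avg_utility(0, N_BEDNET),
}
for name, value in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.3f}")
```

With these particular placeholders, deworming ten children wins; different utility assignments can flip the ordering, which is exactly why the hard work lies in estimating the utilities rather than in the averaging itself.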
This way of framing the decision has several advantages:
Being ‘fair’, because in theory everyone’s interests are given ‘equal consideration’
Putting the focus on how much the recipients value the help, rather than how you feel about it as a donor
Motivating you to actually try to figure out the answer, by putting you in the shoes of the people you are trying to help.
To make the comparison, I would then have to estimate answers to questions like:
How much would deworming improve my quality of life immediately, and then in the long term?
How harmful is it for an infant to die? How painful is it to suffer from a case of malaria?
What risk of death might I be willing to tolerate to get the long-term health and income gains offered by deworming?
And so on.
This is a neat approach, Rob, and some form of it seems likely to be one of the best ways of thinking about this. I think the emphasis on putting yourself in the shoes of those you’re trying to help rather than acting for yourself is particularly valuable. I think there is one extra difficulty that you haven’t mentioned, though, which is to do with people having preferences other than yours.
Even if I’m able to work out that, given a random chance of being one of the participants I would prefer 2 to 3, it doesn’t necessarily follow that 2 is preferable to 3 in an objective sense. It is interesting to imagine what the participants themselves would choose behind your veil (if they were fully informed about the tradeoffs etc.).
In many cases, one finds that people tend to think that their own condition is less bad than people who don’t have the condition do. (That is, if you ask sighted people how bad it would be to be blind, they say it would be much worse than blind people do when asked.) This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important, and those at risk of worms but not malaria would regard treating malaria as most important. It then seems hard to know whom to prioritise.
There’s also the eternal problem with imagining what one would choose: people often choose poorly. I assume you’re making some sort of assumption that you’re choosing under the best possible conditions. It may be, though, that your values depend on your decision-making conditions.
Of course, you still have to choose, and as you say it’s clear that 2 and 3 are both preferable to 1. I think this tool will get you answers most of the time, and can focus your mind on important questions, but there’s an intrinsic uncertainty (or maybe indeterminacy) about the ordering.
I would go for:
1) use their preferences and experiences (pretend you don’t know what you personally want)
2) imagine you knew everything you could about the impacts.
This is, I think, considered the standard approach when reasoning behind a veil.
As you say, you might find it hard to do 1) properly, but I think that effect is small in the scheme of things. It’s also better than not trying at all!
“This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important, and those at risk of worms but not malaria would regard treating malaria as most important.”
Wouldn’t they then cancel out if you took the average of the two when deciding?
I know you qualify this process as your own heuristic rather than a philosophical justification, but I fail to see the value of empathetic projection in this case, which, in practice, is an invitation for all sorts of biases. To state just two points: (i) imagining the experiential world of someone else isn’t the same as, or anywhere near, experientially being someone else; (ii) it is not obvious that the imagined person’s emotional and value set have any normative force as to what distributions we should favour in the world, i.e. X preferring Y to Z is not a normative argument for privileging Y over Z.
In Rawls’ original position, judgement is exercised by a representative invested with a book’s worth of qualifications as to why its conclusions are normatively important, i.e. Rawls tries to exactly model the person as free and equal in circumstances of fairness (it has frequently been argued, quite correctly, that Rawls’ OP is superfluous to Rawls’ actual argument, for the terms of agreement are well-defined outside of it). In the case of your procedure, judgement is exercised by whoever happens to be using it.
IMO, the possibility of normative interpersonal comparisons requires at least: (i) that we can justify a delimited bundle of goods as normatively prior to other goods; (ii) that those goods, within and between themselves, are formally commensurable; (iii) that we can produce a cardinal measure of those goods in the real world; (iv) that we use that measure effectively to calculate correlations between the presence of those goods and the interventions in which we are interested; (v) that we complement this intervention efficacy with non-intervention variables, i.e. if intervention X yields 5 goods and intervention Y 10 goods, but we can deliver 2.5 X at the price of 1 Y in circumstance Z, then in circumstance Z we should prioritise intervention X.
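Condition (v) reduces to a cost-effectiveness comparison. A minimal sketch, using only the numbers from the example above (the variable names are mine):

```python
# Cost-effectiveness comparison for condition (v), using the numbers
# from the example: X yields 5 goods per delivery, Y yields 10, and
# one Y costs the same as 2.5 deliveries of X in circumstance Z.

GOODS_PER_X = 5
GOODS_PER_Y = 10
X_PER_Y_COST = 2.5   # units of X deliverable for the price of one Y

goods_from_x = GOODS_PER_X * X_PER_Y_COST   # goods per Y-sized budget spent on X
goods_from_y = GOODS_PER_Y * 1              # goods from the same budget spent on Y

better = "X" if goods_from_x > goods_from_y else "Y"
print(f"Prioritise intervention {better}")  # X, since 12.5 > 10
```

The point of the five-stage schema is that this final arithmetic is trivial; everything contentious lives in stages (i)–(iv), which supply the cardinal measure of goods that makes the multiplication meaningful.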
I’m sure that, firstly, you know this better and more comprehensively than I, and secondly, that this process itself is a highly ineffective (i.e. resource-consuming) means of proceeding with interpersonal comparisons unless massively scaled. That said, I don’t see why it shouldn’t be a schematic ideal against which to exercise our non-ideal judgements. Your heuristic might roughly help with (iii), and in this respect might be very helpful at the stage of first evaluations, but there are more exacting means, and four other stages, besides.