What is “the premise” that you reject?
The premise that a naive utilitarian calculus, one that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality, is a useful or ethical way to frame the question of how “best” to help people. As I’ve said in another comment, the trolley problem was meant as a stimulus to discussion, not as a guide for making policy decisions around public transport systems.
EDIT: I realise that this description may come across as harsh on a forum populated almost entirely by utilitarians, but I felt that it was important to be clear about the exact nature of my objection. My position is that I agree that utilitarianism should be a tool in our ethical toolkit, but I disagree that it is the tool that we should reach for exclusively, or even first of all.
How can we discuss whether or not it makes sense to help more people over fewer without discussing cases where more or fewer people are helped?
I suppose that part of my point is that we may not be discussing whether or not it makes sense to help more people over fewer. We may be discussing how we can help the people who are most in need, who may cost more or less to help than other people.
I’ve claimed that naive utilitarian calculus is simply not that useful in guiding actual policy decisions. Those decisions—which happen every day in aid organisations—need to include a much wider range of factors than just numbers.
If we keep it in the realm of thought experiments, it’s a simple question and an obvious answer. But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?
‘But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?’
No, of course not. But in assessing the real-world problem, you seemed to be relying on some sort of claim that it is sometimes better to help fewer people if it means a fairer distribution of help. So I was raising a problem for that view: if you think it is sometimes better to distribute money to more countries even though it helps fewer people, then either that is always better in any possible circumstance, realistic or otherwise, or it is sometimes better and sometimes not, depending on circumstance. The thought experiment then comes in to show that there are possible, albeit not very realistic, circumstances where it clearly isn’t better. So that shows that one of the two options available to someone with your view is wrong.
Then I challenged the other option, that it is sometimes better and sometimes not, but the thought experiment wasn’t doing any work there. Instead, I just asked what you think determines when it is better to distribute the money more evenly between countries versus when it is better to just help the most people, and implied that this is a hard question to answer. As it happens, I don’t actually think that this view is definitely wrong, and you have hinted at a good answer, namely that we should sometimes help fewer people in order to prioritize the absolutely worst off. But I think it is a genuine problem for views like this that it is always going to look a bit hazy what determines exactly how much you should prioritize the worst off, and the view does seem to imply that there must be an answer to that.
I think we need to get away from “countries” as a frame—the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps fewer people”.
If we take the Bangladeshi school thought experiment—that with available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to accessing education—my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps fewer people”. You could certainly describe it that way—just as Chappell describes Goldring’s statement—but that is clearly not the basis of the decision itself, which is more concerned with relative needs in an equity framework.
You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t make that assessment on the basis of the thought experiment, but on the basis that moral decisions are almost always context-specific and often fuzzy around the edges.
So while I agree that it is always going to look a bit hazy what determines your priorities, I don’t see it as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?
‘Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?’
Yes, indeed, I think I agree with everything in this last post. In general non-utilitarian views tend to capture more of what we actually care about at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It’s a hard question how to trade off between these things. Though be careful not to make the mistake of thinking utilitarianism implies that the facts about what empirical effects an action will have are simple: it says nothing about that at all.
Or at least, I think that, technically speaking, it is true that “it is sometimes better to distribute money to more genders even though it helps fewer people” is something you believe, but that’s a highly misleading way of describing your view: i.e. likely to make a reasonable person who takes it at face value believe other things about you and your view that are false.
I think the countries thing probably got this conversation off on the wrong foot, because EAs are strongly opposed to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard’s original article that the conversation started there, since the charitable reading of Goldring is that he was making a point about prioritizing the worst off and using an example with countries to illustrate it, not saying that it is inherently fairer to distribute resources across more countries.
As a further point: EAs who are philosophers are likely aware, when they are being careful and reflective, that some people reasonably think that it is better to help a person the worse off they are, since the philosopher Derek Parfit, who is one of the intellectual founders of EA, invented a particularly famous variant of that view: https://oxfordre.com/politics/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-232
My guess (though it is only a guess) is that if you ask Will MacAskill he’ll tell you that, at least in an artificial case where you can help either a million people who are very badly off or a million and one people who are much better off, by the same amount per person, you ought to help the worse-off people. It’s hard to see how he could deny that, given that he recommends giving some weight to all reasonable moral views in your decision-making, that prioritizing the worse off is reasonable, and that in this sort of case, helping the worse-off people is much better if we ought to prioritize the worse off, while helping the million and one is only a very small amount better on the view where you ought just to help the most people.
Note, by the way, that you can actually hold the ‘always bring about the biggest benefit when distributing resources, without worrying about prioritizing the worst off’ view and still reject utilitarianism overall. For example, it is consistent with “help more people rather than fewer when the benefit per person is the same size” that you value things other than happiness/suffering or preference satisfaction, that you believe it is sometimes wrong to violate rights in order to bring about the best outcome, etc.
Likewise, I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion, and that you found grounds for agreement, at least around the hazy edges.
Thanks to you and @Dr. David Mathers for this useful discussion.
Wait, I just want to make an object-level objection for third-party readers: in most liberal democracies, most policy-making is guided by cost-benefit analysis and the assignment of a value of statistical life (VSL).
To clarify your objection: such policy-making is guided by, but not solely determined by, such approaches.