I’m not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value, so I would need to have a clearer picture of the operating environments in both countries and the funding environment at the global level before I would be able to answer it.
Just because you’re not a utilitarian doesn’t mean you can reject the premise of the question. Deontologists have the same problem with trade-offs! The premise of the question is one even the Oxfam report accepts. I also don’t think you know what an instrumental value is. I think you keep throwing the term out without understanding how it frames the instrumental, empirical question in a way that makes other values dissolve.
Can you give me an argument for why I can’t reject the premise of the question, rather than just telling me I can’t? I’ve explained why I reject it in these comments. Goldring “accepts” the premise only in the sense that he’s attending an event which is based entirely on that premise, and has had that premise forced onto him through the rhetorical trick which I described in my reply to Chappell.
I think you’re partly right about my confusion about instrumental values. Now that I reconsider, the humanitarian principles are a strange mix of instrumental and intrinsic values; regardless, effectiveness remains solely an instrumental value. Perhaps you could explain what you mean by “other values dissolve”?
Reasons why you can’t reject the premise:
Trade-offs inhere in all ethical systems, so “rejecting utilitarianism” doesn’t do the work you think it does. The values you listed upthread that “inhere” in humanitarian work are subject to the same trade-offs.
The actual premise you’re rejecting is one you rely on: that of equal moral consideration of peoples. Each time you manipulate the ratio of the trade-off by rejecting “cost-effectiveness”, you stop treating people as morally equivalent.
Reasons you actually can reject the premise:
Actions that are upside bargains, i.e. ones that break the trade-off by making both options possible. But this is not the nature of aid as it currently stands.
I think what you think you’re doing by saying you’re not a utilitarian is saying that you care about things EAs don’t care about in the impact of aid. But even with other values you create different ratios of trade-offs and Pareto optimality, such that you’re always trading off something even if it’s not utilitarianism. It’s still something that is a cost and something that is a benefit. There’s no rhetorical trick here, just the fungible nature of cash. The fact that cost-effectiveness isn’t an intrinsic value is what makes it a deciding force in the ratio of trade-offs between other values.
Can you explain what you mean by “There’s no rhetorical trick here, just the fungible nature of cash”? In practice cost-effectiveness is *a* deciding force, but not *the* deciding force.
I think this is what you’re saying: there are a plurality of values that EAs don’t seem to care about, which are deeply important and are skipped over through naive utilitarianism. These values cannot be measured through cost-effectiveness because they are deeply ingrained in the human experience.
The stronger version, which I think you’re trying to elucidate but haven’t stated clearly, is that cost-effectiveness can be inversely correlated with another value that is “more” determinant on a moral level. E.g. North Koreans cost a lot more to help than Nigerians with malaria, but the difficulty of helping them cost-effectively inheres in their situation, which is an injustice in and of itself.
What I am saying is that insofar as we’re in the realm of charity and budgets and financial trade-offs, it doesn’t matter what your intrinsic value commitments are. There are choices that produce more of that value or less of that value, and that is what the concept of cost-effectiveness captures. Thus, it is a crux no matter what intrinsic value system you pick. Even deontology has these issues, as I noted in my first response to you.
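To make that concrete, here is a minimal formalisation of the point (the notation is mine, not anything from the thread): whatever intrinsic value you care about, a fixed budget forces a choice between options that produce more or less of it per unit of money.

```latex
% For any intrinsic value measured by a function V, and monetary cost C,
% define the cost-effectiveness of an option a as
\[ \mathrm{CE}(a) = \frac{V(a)}{C(a)}. \]
% Spending a fixed budget B on a (roughly) scalable option a then yields
\[ \text{total value} \approx B \cdot \mathrm{CE}(a), \]
% so the higher-CE option produces more of V, whatever V happens to be.
% This is why cost-effectiveness bites regardless of one's intrinsic values.
```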
Thanks, yes. I think I’m elucidating it pretty clearly, but perhaps I’m wrong!
As I’ve said, I’m not denying that cost effectiveness is a determinant in decision-making—it plainly is a determinant, and an important one. What I am claiming is that it is not the primary determinant in decision-making, and simple calculus (as in the original thought experiment) is not really useful for decision-making.
The premise I reject is not that there are always trade-offs, but that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how “best” to help people.
What is “the premise” that you reject?
The premise that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how “best” to help people. As I’ve said in another comment, the trolley problem was meant as a stimulus to discussion, not as a guide for making policy decisions around public transport systems.
EDIT: I realise that this description may come across as harsh on a forum populated almost entirely by utilitarians, but I felt that it was important to be clear about the exact nature of my objection. My position is that I agree that utilitarianism should be a tool in our ethical toolkit, but I disagree that it is the tool that we should reach for exclusively, or even first of all.
How can we discuss whether or not it makes sense to help more people over fewer without discussing cases where more or fewer people are helped?
I suppose that part of my point is that we may not be discussing whether or not it makes sense to help more people over fewer. We may be discussing how we can help the people who are most in need, who may cost more or less to help than other people.
I’ve claimed that naive utilitarian calculus is simply not that useful in guiding actual policy decisions. Those decisions—which happen every day in aid organisations—need to include a much wider range of factors than just numbers.
If we keep it in the realm of thought experiments, it’s a simple question and an obvious answer. But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?
‘But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?’
No, of course not. But in assessing the real-world problem, you seemed to be relying on some sort of claim that it is sometimes better to help fewer people if it means a fairer distribution of help. So I was raising a problem for that view: if you think it is sometimes better to distribute money to more countries even though it helps fewer people, then either that is always better in any possible circumstance, realistic or otherwise, or it is sometimes better and sometimes not, depending on circumstance. Then the thought experiment comes in to show that there are possible, albeit not very realistic, circumstances where it clearly isn’t better. So that shows that one of the two options available to someone with your view is wrong.
Then, I challenged the other option, that it is sometimes better and sometimes not, but the thought experiment wasn’t doing any work there. Instead, I just asked what you think determines when it is better to distribute the money more evenly between countries versus when it is better to just help the most people, and implied that this is a hard question to answer. As it happens, I don’t actually think that this view is definitely wrong, and you have hinted at a good answer, namely that we should sometimes help fewer people in order to prioritize the absolutely worst off. But I think it is a genuine problem for views like this that it is always going to look a bit hazy what determines exactly how much you should prioritize the worst off, and the view does seem to imply that there must be an answer to that.
I think we need to get away from “countries” as a frame—the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps fewer people”.
If we take the Bangladeshi school thought experiment—that with the available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to accessing education—my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps fewer people”. You could certainly describe it that way—just as Chappell describes Goldring’s statement—but that is clearly not the basis of the decision itself, which is more concerned with relative needs within an equity framework.
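For concreteness, the arithmetic implicit in that example, writing B for the available budget (a symbol I am introducing, not one from the discussion):

```latex
% The example implies per-pupil costs of
\[ c_{\text{boy}} = \frac{B}{1000}, \qquad
   c_{\text{girl}} = \frac{B}{800} = 1.25\, c_{\text{boy}}, \]
% so choosing the girls' programme means accepting a 25% higher unit cost
% in order to reach the group facing greater barriers to access.
```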
You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t make that assessment on the basis of the thought experiment, but on the basis that moral decisions are almost always context-specific and often fuzzy around the edges.
So while I agree that it is always going to look a bit hazy what determines your priorities, I don’t see it as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?
‘Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?’
Yes, indeed, I think I agree with everything in this last post. In general non-utilitarian views tend to capture more of what we actually care about at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It’s a hard question how to trade off between these things. Though be careful not to make the mistake of thinking utilitarianism implies that the facts about what empirical effects an action will have are simple: it says nothing about that at all.
Or at least, I think that, technically speaking, it is true that “it is sometimes better to distribute money to more genders even though it helps fewer people” is something you believe, but that’s a highly misleading way of describing your view: i.e. likely to make a reasonable person who takes it at face value believe other things about you and your view that are false.
I think the countries thing probably got this conversation off on the wrong foot, because EAs have very strong opposition to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard’s original article that the conversation started there, since the charitable reading of Goldring was that he was making a point about prioritizing the worst off and using an example with countries to illustrate that, not saying that it’s inherently more fair to distribute resources across more countries.
As a further point: EAs who are philosophers are likely aware, when they are being careful and reflective, that some people reasonably think that it is better to help a person the worse off they are, since the philosopher Derek Parfit, who is one of the intellectual founders of EA, invented a particularly famous variant of that view: https://oxfordre.com/politics/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-232
My guess (though it is only a guess) is that if you ask Will MacAskill, he’ll tell you that at least in an artificial case where you can either help a million people who are very badly off or help a million and one people who are much better off, with the benefit per person being the same size in both cases, you ought to help the worse-off people. It’s hard to see how he could deny that, given that he recommends giving some weight to all reasonable moral views in your decision-making, that prioritizing the worse off is reasonable, and that in this sort of case, helping the worse-off people is much better if we ought to prioritize the worse off, while helping the million and one is only a very small amount better on the view where you ought just to help the most people.
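To see why the numbers come out that way, here is the arithmetic under a simple prioritarian weighting (the weights are illustrative assumptions of mine, not anything MacAskill has endorsed):

```latex
% Each beneficiary gains the same benefit b. Weight benefits to the worse
% off by w_1 and benefits to the better off by w_2 < w_1. Then
\[ \underbrace{10^{6}\, w_1\, b}_{\text{help the worse off}}
   \;>\;
   \underbrace{(10^{6}+1)\, w_2\, b}_{\text{help one more person}}
   \quad\Longleftrightarrow\quad
   \frac{w_1}{w_2} \;>\; 1 + 10^{-6}, \]
% so any non-trivial priority for the worse off settles the case, while the
% unweighted headcount favours the larger group by only one person in a million.
```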
Note, by the way, that you can have the ‘always bring about the biggest benefit when distributing resources, without worrying about prioritizing the worst off’ view and still reject utilitarianism overall. For example, it’s consistent with “help more people rather than fewer when the benefit per person is the same size” that you value things other than happiness/suffering or preference satisfaction, that you believe it is sometimes wrong to violate rights in order to bring about the best outcome, etc.
Likewise, I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion, and that you found grounds for agreement, at least around the hazy edges.
Thanks to you and @Dr. David Mathers for this useful discussion.
Wait, I just want to make an object-level objection for the benefit of third-party readers: in most liberal democracies, most policy-making is in fact guided by cost-benefit analysis and the assignment of a value of statistical life (VSL).
To clarify your objection: such policy-making is guided by, but not solely determined by, such approaches.
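For third-party readers unfamiliar with how that works in practice, here is a minimal sketch of a VSL-style cost-benefit test; the $10M VSL and the scenario figures are illustrative assumptions, not numbers from any agency:

```python
# Minimal sketch of a VSL-based cost-benefit test, of the kind used in
# regulatory appraisal. All figures are hypothetical and purely illustrative.

def net_benefit(deaths_averted: float,
                policy_cost: float,
                vsl: float = 10_000_000) -> float:
    """Monetised net benefit: deaths averted valued at the VSL, minus cost."""
    return deaths_averted * vsl - policy_cost

# Example: a safety regulation expected to avert 12 deaths, costing $70M.
nb = net_benefit(deaths_averted=12, policy_cost=70_000_000)
print(f"Net benefit: ${nb:,.0f}")  # positive, so the policy passes the test
```

As the clarification above notes, such a test guides rather than solely determines the decision; distributional and other considerations enter alongside it.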