I think we need to get away from “countries” as a frame—the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps less people”.
If we take the Bangladeshi school thought experiment—that with available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to accessing education—my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps less people”. You could certainly describe it that way—just as Chappell describes Goldring’s statement—but that is clearly not the basis of the decision itself, which is concerned with relative needs within an equity framework.
You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t reach that conclusion from the thought experiment; I reach it because moral decisions are almost always context-specific and often fuzzy around the edges.
So while I agree that what determines your priorities is always going to look a bit hazy, I don’t see that as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?
‘Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?’
Yes, indeed, I think I agree with everything in this last post. In general, non-utilitarian views tend to capture more of what we actually care about, at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It’s a hard question how to trade off between these things. Though be careful not to make the mistake of thinking that utilitarianism implies the empirical facts about an action’s effects are simple: it says nothing about that at all.
Or at least, I think that, technically speaking, “it is sometimes better to distribute money to more genders even though it helps less people” is something you believe, but that it is a highly misleading way of describing your view: it is likely to make a reasonable person who takes it at face value believe other things about you and your view that are false.
I think the countries thing probably got this conversation off on the wrong foot, because EAs are strongly opposed to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard’s original article that the conversation started there, since the charitable reading of Goldring was that he was making a point about prioritizing the worst off and merely using an example involving countries to illustrate it, not claiming that it is inherently fairer to distribute resources across more countries.
As a further point: EAs who are philosophers are likely aware, when they are being careful and reflective, that some people reasonably think it is better to benefit a person the worse off they are, since the philosopher Derek Parfit, one of the intellectual founders of EA, formulated a famous version of that view, the “priority view”: https://oxfordre.com/politics/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-232
My guess (though it is only a guess) is that if you ask Will MacAskill, he’ll tell you that, at least in an artificial case where you can either help a million people who are very badly off or help a million and one much better-off people by the same amount per person, you ought to help the worse-off people. It’s hard to see how he could deny that: he recommends giving some weight to all reasonable moral views in your decision-making; prioritizing the worse off is a reasonable view; and in this sort of case, helping the worse-off people is much better if we ought to prioritize the worse off, while helping the million and one is only a very small amount better on the view that you ought simply to help the most people.
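To make the comparison of magnitudes in that artificial case concrete, here is a toy numerical sketch in Python. The welfare levels, the benefit size, and the crude “1/welfare” priority weighting are all made up purely for illustration; they are not drawn from MacAskill, Parfit, or anyone else in this discussion.

```python
# Toy sketch of the reasoning above. All numbers and the simple 1/welfare
# priority weighting are invented purely for illustration.

def headcount_value(n_people, benefit):
    # "Just help the most people": value is the total benefit delivered.
    return n_people * benefit

def priority_value(n_people, benefit, welfare_before):
    # A crude priority view: the same benefit counts for more
    # the worse off the recipient is (here, weighted by 1 / welfare).
    return n_people * benefit / welfare_before

BENEFIT = 1.0  # same size of benefit per person in both options

# Option A: a million people who are very badly off (welfare level 1).
# Option B: a million and one people who are much better off (welfare level 10).
options = {
    "A: help the worse off": (1_000_000, 1.0),
    "B: help one extra person": (1_000_001, 10.0),
}

for label, (n, welfare) in options.items():
    print(label,
          "| headcount:", headcount_value(n, BENEFIT),
          "| priority:", round(priority_value(n, BENEFIT, welfare), 1))

# On these made-up numbers, B beats A by a sliver on the headcount view
# (1,000,001 vs 1,000,000), while A beats B by roughly a factor of ten on the
# priority view, so giving the priority view any non-trivial weight tips the
# overall verdict towards A.
```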
Note, by the way, that you can hold the “always bring about the biggest benefit when distributing resources, without worrying about prioritizing the worst off” view and still reject utilitarianism overall. For example, it’s consistent with “help more people rather than fewer when the benefit per person is the same size” that you value things other than happiness/suffering or preference satisfaction, that you believe it is sometimes wrong to violate rights in order to bring about the best outcome, etc.
Likewise, I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion and that you found grounds for agreement, at least around the hazy edges.
Thanks to you and @Dr. David Mathers for this useful discussion.