Can you give me an argument for why I can’t reject the premise of the question, rather than just telling me I can’t? I’ve explained why I reject it in these comments. Goldring “accepts” the premise only in the sense that he’s attending an event which is based entirely on that premise, and has had that premise forced onto him through the rhetorical trick which I described in my reply to Chappell.
I think you’re partly right about my confusion about instrumental values. Now that I reconsider, the humanitarian principles are a strange mix of instrumental and intrinsic values; regardless, effectiveness remains solely an instrumental value. Perhaps you could explain what you mean by “other values dissolve”?
Reasons why you can’t reject the premise:
Trade-offs inhere in all ethical systems, so “rejecting utilitarianism” doesn’t do the work you think it does. The values you listed upthread as “inhering” in humanitarian work are subject to the same trade-offs.
The actual premise you’re rejecting is one you rely on: the equal moral consideration of peoples. Each time you manipulate the ratio of a trade-off by rejecting “cost-effectiveness”, you break with treating people as morally equivalent.
Reasons you actually can reject the premise:
Actions that are upside bargains, i.e. ones that break the trade-off by getting both options done. But this is not the nature of aid as it currently stands.
I think that what you think you’re doing, by saying you’re not a utilitarian, is saying that you care about things in the impact of aid that EAs don’t care about. But even with other values you create different ratios of trade-offs and Pareto optimality, such that you’re always trading something off even if it isn’t utility. There is still something that is a cost and something that is a benefit. There’s no rhetorical trick here, just the fungible nature of cash. The fact that cost-effectiveness isn’t an intrinsic value is exactly what makes it a deciding force in the ratio of trade-offs between other values.
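A minimal sketch of what I mean, with invented numbers (the two options, the two value systems, and all the figures here are purely hypothetical, not data about any real programme): whichever intrinsic value you choose to weight, a fixed budget produces more or less of it depending on how it is allocated, so the value-per-dollar ratios still decide the outcome.

```python
# Purely hypothetical sketch: two aid options, one fixed budget, and two
# different intrinsic value systems. All numbers are invented.

BUDGET = 100_000  # dollars

# Units of each value produced per dollar spent, under two value systems:
# "welfare" (a utilitarian-style measure) and "dignity" (some other value).
VALUE_PER_DOLLAR = {
    "option_a": {"welfare": 0.010, "dignity": 0.002},
    "option_b": {"welfare": 0.004, "dignity": 0.009},
}

def total_value(fraction_to_a: float, value: str) -> float:
    """Total value produced when a fraction of the budget goes to option A."""
    spend_a = BUDGET * fraction_to_a
    spend_b = BUDGET - spend_a
    return (spend_a * VALUE_PER_DOLLAR["option_a"][value]
            + spend_b * VALUE_PER_DOLLAR["option_b"][value])

for value in ("welfare", "dignity"):
    best_total, best_split = max(
        (total_value(s / 10, value), s / 10) for s in range(11)
    )
    print(f"{value}: best fraction to A = {best_split:.1f}, "
          f"total produced = {best_total:.0f}")

# Whichever value you pick, the allocation that maximises it is set by the
# value-per-dollar ratios: cost-effectiveness decides the trade-off anyway.
```

The point of the sketch is that nothing in it is utilitarian: swap “dignity” for any value you like and the same budget arithmetic forces the same kind of trade-off.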
Can you explain what you mean by “There’s no rhetorical trick here, just the fungible nature of cash”? In practice, cost-effectiveness is a deciding force, but not the deciding force.
I think I see what you’re saying: there is a plurality of values, deeply important ones, that EAs don’t seem to care about and that naive utilitarianism skips over. These values cannot be measured through cost-effectiveness because they are deeply ingrained in the human experience.
The stronger version, which I think you’re trying to elucidate but haven’t yet stated clearly, is that cost-effectiveness can be inversely correlated with another value that is “more” determinant on a moral level. E.g. North Koreans cost far more to help than Nigerians with malaria, but the difficulty of helping them cost-effectively inheres in their situation, which is an injustice in and of itself.
What I am saying is that insofar as we’re in the realm of charity, budgets, and financial trade-offs, it doesn’t matter what your intrinsic value commitments are. There are choices that produce more of that value and choices that produce less of it, and that is what the concept of cost-effectiveness captures. Thus it is a crux no matter what intrinsic value system you pick. Even deontology has these issues, as I noted in my first response to you.
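Put in arithmetic terms (again with figures invented purely for illustration): for any intrinsic value V, if one option produces more units of V per dollar than another, a fixed budget spent on the first yields strictly more V, and the gap is the budget times the difference in the rates. That identity holds whatever V is, which is the sense in which cost-effectiveness is a crux for every value system.

```python
# Hypothetical figures only: the same point for an arbitrary value V.
budget_thousands = 100          # a $100,000 budget, in thousands of dollars
e1, e2 = 20, 5                  # units of V produced per $1,000 by each option
print(budget_thousands * e1)         # 2000 units of V from option 1
print(budget_thousands * e2)         # 500 units of V from option 2
print(budget_thousands * (e1 - e2))  # 1500 units of V forgone by choosing 2
```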
Thanks, yes. I think I’m elucidating it pretty clearly, but perhaps I’m wrong!
As I’ve said, I’m not denying that cost-effectiveness is a determinant in decision-making; it plainly is, and an important one. What I am claiming is that it is not the primary determinant, and that simple calculus (as in the original thought experiment) is not really useful for decision-making.
The premise I reject is not that there are always trade-offs, but that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how “best” to help people.