To answer my own question, I personally assign some weight to all of these positions. I find the fourth (that I owe special duties to my near and dear) particularly plausible. However, I don’t find it plausible that I owe special duties to my fellow citizens (at least not to an extent that should stop me donating everything over a certain amount to the global poor). I also think that we should take the third sort of position extra seriously, and avoid taking actions that are actively wrong on popular non-consequentialist theories. An additional reason for this is that there are often good but subtle consequentialist grounds for avoiding these actions, and in my experience some consequentialists are insufficiently sensitive to them.
One thing to consider about (2) is that there are also non-consequentialist reasons to treat non-human animals better than we treat humans (relative to their interests). As one example, because humans have long treated animals unjustly, reasons of reciprocity require us to discount human interests relative to theirs. So that might push in the opposite direction from discounting animal interests due to moral uncertainty.
David Moss also mentions this in the Facebook thread:
“I think it’s quite plausible that common non-consequentialist positions would support much stronger stances on non-human animals, for example, because they object to acts that constitute active harm and oppression of innocent victims etc. It’s at least partly for this reason that some animal advocates have taken to self-consciously employing deontological criticisms of non-human animal suffering, that they ostensibly don’t themselves believe to be true, as I understand it.”
In some cases “special duties” to family can be derived as heuristics for utilitarianism. As a family member, you probably aren’t replaceable, families tend to expect help from their members, and families are predisposed to reciprocate altruism: ignoring your family therefore carries a large chance of high negative utility, both for you and for them. The consequences to you could be negative enough to make you less effective as an altruist in general.
For example, if you are a college student interested in EA and your parents stop paying for your degree, you will have much less money to donate, and much less time to study if you have to pick up a job to pay your way through school.
Taking care of parents when they get older might also seem fairly non-consequentialist, but if there is a large inheritance at stake it could be the case that taking good care of your family is the highest utility thing for you to do.
As for kids, in some cases raising them to become effective altruists may be the highest-leverage thing to do. John Stuart Mill, for example, was raised in this manner… though I am sure he may have been quite miserable in the process:
http://en.wikipedia.org/wiki/John_Stuart_Mill#Biography
True, but I’d assume you’d agree that non-consequentialists who allow for special duties have different, and potentially stronger and more overriding, reasons.
“John Stuart Mill, for example, was raised in this manner… though I am sure he may have been quite miserable in the process”
Indeed, he had a breakdown which he put down to his upbringing, though I don’t know whether that was primarily due to the utilitarian aspects of it. If I recall correctly, the (deeply uncharitable) parody of such an upbringing in Dickens’ Hard Times was based on Mill.
This strikes me as a highly wishful and ad hoc adaptation of utilitarianism to pre-given moral dispositions, and personally, as something of a reductio.
Are you honestly suggesting the following as an inter-personal or intra-personal justification?:
“Taking care of parents when they get older might also seem fairly non-consequentialist, but if there is a large inheritance at stake it could be the case that taking good care of your family is the highest utility thing for you to do.”
It follows, I suppose, if there is no inheritance at stake, that you should let them rot.
How do you justify utilitarianism? I can only hope not via intuitionism.
These are heuristics for specialized cases. In most cases you can do far more good elsewhere than you can for your family. The case with Mill is one where you are raising a child who will help many more people than you could on your own; the case with parents is likewise one where helping them lets you help many others, via donating more than you could on your own. If we are being Kantian about this, the parents still aren’t being used merely as a means, because their own happiness matters and is part of the consideration.
In cases where helping your parents helps only your parents, why not help someone else whom you could help more effectively? There are more appalling counterfactual cases than letting parents rot, such as letting 10 times as many non-related people rot.
I think a fairly small set of axioms can be used to justify utilitarianism. This should get you pretty close:
-Only consequences matter.
-The only consequences that matter are experiences.
-Experiences that are preferred by beings are positive in value.
-Experiences that are avoided by beings are negative in value.
-The von Neumann–Morgenstern (VNM) axioms of rational choice (sketched briefly below).
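(As a rough sketch of what that last item delivers, on the standard lottery framing and assuming the usual statement of the theorem: if an agent’s preferences $\succeq$ over lotteries satisfy completeness, transitivity, continuity, and independence, the VNM theorem guarantees a utility function $u$, unique up to positive affine transformation, such that

$$L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u],$$

i.e. the agent ranks options by expected utility. The earlier axioms are then what make the thing being maximized welfare in particular.)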
It is certainly possible to disagree with these statements though, and those who agree with them might justify them based on intuitions coming from thought experiments.
Most think that one’s reason for action should be one’s actual reason for action, rather than a sophistic rationalisation of a pre-given reason. There’s no reason to adopt those ‘axioms’ independent of adopting those axioms; they certainly, as stated, have no impersonal and objective force. Insofar as that reason is mere intuition, which I see no reason for respecting, your axioms are clearly insufficient with regard to any normal person. Indeed, the entire post-Rawlsian establishment of Anglophone political theory is based on exactly the comparatively moving intuition of placing the right prior to the good.
“In cases where helping your parents helps only your parents, why not help someone else who you could help more effectively?”
That rhetorically begs the question of the evaluative content of help, or that helping persons is of especial value.
Does anything have impersonal and objective force? I am rather confused as to what you are comparing this to that is better. If you are just talking about forcing people to believe things, that doesn’t necessarily have anything to do with what is true. If you are just comparing to Rawls, why should I accept Rawls’ formulation of the right as being prior to or independent of the good? You can use Rawls’ veil of ignorance thought experiment to support utilitarianism (1), so I don’t see how Rawls can really be a counter-objection, or specifically how Rawls’ arguments don’t rely on evoking intuitions. I may be misunderstanding the last sentence of your first paragraph though, so I do think it is possible that you have an argument which will change my mind.
I haven’t seen anyone attack the VNM axioms, as there are plenty of non-consequentialists who think there are good reasons for believing them. I have a feeling you are really attacking the other axioms I presented, not these.
“sophistic rationalisation of a pre-given reason”
This is a pretty uncharitable jump to an accusation. The statements I listed above are not immune to attack: when I am convinced to drop an axiom or to adopt a different one, the ethical system I advocate changes. I had different values before I became a utilitarian, and my beliefs about what is of utility changed as the axioms I used to derive them changed.
When I was a preference utilitarian, I came across a thought experiment asking me to imagine a preference which has no consequences in terms of experience when it is satisfied. It didn’t seem like such preferences could matter, so I was no longer a preference utilitarian. There wasn’t a pre-given reason, though intuitions were used.
If you do think there is a way to derive a good ethical theory which does not rely on appealing to intuitions at some point in the argument, I would be very interested in hearing it. =)
(Note from earlier) (1) Consider what world a self-interested being would make from behind the veil of ignorance. The most rational thing, given its goals, is to maximize expected benefit, which aligns exactly with what some utilitarians argue for.
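(A minimal sketch of the expected-benefit step in (1), assuming the being behind the veil is equally likely to occupy any of the $n$ positions in the world, with position $i$ carrying utility $u_i$:

$$\mathbb{E}[\text{benefit}] = \sum_{i=1}^{n} \frac{1}{n}\, u_i = \frac{1}{n} \sum_{i=1}^{n} u_i,$$

so maximizing expected benefit is equivalent to maximizing total, or equivalently average, utility, which is the utilitarian criterion.)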