This strikes me as a highly wishful and ad hoc adaptation of utilitarianism to pre-given moral dispositions, and personally, as something of a reductio.
Are you honestly suggesting the following as an interpersonal or intrapersonal justification?
“Taking care of parents when they get older might also seem fairly non-consequentialist, but if there is a large inheritance at stake it could be the case that taking good care of your family is the highest utility thing for you to do.”
It follows, I suppose, if there is no inheritance at stake, that you should let them rot.
How do you justify utilitarianism? I can only hope not via intuitionism.
These are heuristics for specialized cases; in most cases you can do far more good elsewhere than you can do for your family. In the Mill example, you are developing a child who will help many more people than you could alone; in the parents example, you are likewise helping them so that, by donating, they can help many more others than you could on your own. If we are being Kantian about this, the parents still aren’t being used merely as a means, because their own happiness matters and is part of the consideration.
In cases where helping your parents helps only your parents, why not help someone else whom you could help more effectively? There are more appalling counterfactual cases than letting parents rot, such as letting ten times as many unrelated people rot.
I think a fairly small set of axioms can be used to justify utilitarianism. This should get you pretty close:
-Only consequences matter.
-The only consequences that matter are experiences.
-Experiences that are preferred by beings are positive in value.
-Experiences that are avoided by beings are negative in value.
-The VNM axioms.
It is certainly possible to disagree with these statements though, and those who agree with them might justify them based on intuitions coming from thought experiments.
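The upshot of the VNM axioms can be made concrete: any agent whose preferences over lotteries satisfy them behaves as if it ranks lotteries by expected utility. A minimal sketch of that conclusion (the outcomes and utility numbers here are invented purely for illustration):

```python
# Sketch of the VNM conclusion: a VNM-rational agent ranks lotteries
# (probability distributions over outcomes) by expected utility.
# The outcomes and utility values below are invented for illustration.

def expected_utility(lottery, utility):
    """lottery: dict mapping outcome -> probability; utility: outcome -> value."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

utility = {"pain": -10, "neutral": 0, "joy": 5}

safe = {"neutral": 1.0}                 # a sure thing
gamble = {"joy": 0.5, "pain": 0.5}      # a risky 50/50 lottery

# EU(safe) = 0.0; EU(gamble) = 0.5*5 + 0.5*(-10) = -2.5,
# so an agent with these utilities prefers the sure thing.
print(expected_utility(safe, utility))    # 0.0
print(expected_utility(gamble, utility))  # -2.5
```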
Most think that one’s reason for action should be one’s actual reason for action, rather than a sophistic rationalisation of a pre-given reason. There’s no reason to adopt those ‘axioms’ independent of adopting those axioms; they certainly, as stated, have no impersonal and objective force. Insofar as that reason is mere intuition, which I see no reason for respecting, then clearly your axioms are insufficient with regard to any normal person—indeed, the entire post-Rawlsian establishment of Anglophone political theory is based exactly on the comparatively moving intuition of placing the right prior to the good.
“In cases where helping your parents helps only your parents, why not help someone else who you could help more effectively?”
That rhetorically begs the question of the evaluative content of help, or that helping persons is of especial value.
Does anything have impersonal and objective force? I am rather confused as to what you are comparing this to that is better. If you are just talking about forcing people to believe things, that doesn’t necessarily have anything to do with what is true. If you were just comparing to Rawls, why should I accept Rawls’ formulation of the right as being prior to or independent from the good? You can use Rawls’ veil of ignorance thought experiment to support utilitarianism (1), so I don’t see how Rawls can really be a counter-objection, or specifically how Rawls’ arguments don’t rely on evoking intuitions. I may be misunderstanding the last sentence of your first paragraph, though, so I do think it is possible that you have an argument which will change my mind.
I haven’t seen anyone attack the VNM axioms, as there are plenty of non-consequentialists who think there are good reasons for believing them. I have a feeling you are really attacking the other presented axioms, not these.
“sophistic rationalisation of a pre-given reason”
This is a pretty uncharitable jump to accusation. The statements I listed above are not immune to attack: when one is convinced to drop an axiom or to adopt a different one, the ethical system advocated will change. I had different values before I became a utilitarian, and my beliefs about what was of utility changed based on changes in the axioms I used to derive them.
When I was a preference utilitarian, I came across a thought experiment about imagining a preference which has no consequences in terms of experience when it is satisfied. It didn’t seem like such preferences could matter, so I was no longer a preference utilitarian. There wasn’t a pre-given reason, though intuitions were used.
If you do think there is a way to derive a good ethical theory which does not rely on appealing to intuitions at some point in the argument, I would be very interested in hearing it. =)
(note from earlier)
(1) Consider what world a self-benefiting being would make from behind the veil of ignorance. The most rational thing, given its goals, is to maximize expected benefit, which aligns exactly with what some utilitarians argue for.
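The footnote’s claim can be sketched concretely: an agent behind the veil is equally likely to occupy any position in society, so its expected benefit in a given society is just that society’s mean welfare, and ranking societies by expected benefit is the same as ranking them by average welfare. A toy illustration (the welfare numbers are invented):

```python
# Behind the veil of ignorance, the agent is equally likely to land in
# any position, so its expected benefit is the mean welfare of society.
# Welfare numbers below are invented for illustration.

def expected_benefit(welfares):
    """Expected benefit for an agent with a uniform chance of each position."""
    return sum(welfares) / len(welfares)

equal_society = [5, 5, 5, 5]
unequal_society = [20, 1, 1, 1]

# Expected benefit: 5.0 vs 5.75 — the self-interested agent picks the
# higher average, the same choice an average utilitarian would make.
print(expected_benefit(equal_society))    # 5.0
print(expected_benefit(unequal_society))  # 5.75
```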