I think this is an important point. People might want to start with additional or just different axioms, including, as you say, avoiding the repugnant conclusion, and if those axioms can’t all be jointly consistent, then this theorem may unjustifiably privilege a specific subset of them.
I do think this is an argument for utilitarianism, but more in the sense of “This is a reason to be a utilitarian, but other reasons might outweigh it.” I think it does have some normative weight in this way.
Also, independence of irrelevant alternatives is safer to give up than transitivity, and might accomplish most of what you want. See my other comment.
Thanks for the pointer to “independence of irrelevant alternatives.”
I’m curious to know how you think about “some normative weight.” I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?
I think if you believe the conditions of the theorem are all plausible or desirable and so give them some weight, then you should give the conclusion some weight, too.
For example, it’s unlikely that anyone’s ethical rankings actually satisfy the vNM rationality conditions in practice. But if you give any weight to the claims that we should have ethical rankings that are complete, continuous with respect to probabilities (which are assumed to work in the standard way), satisfy the independence of irrelevant alternatives, and avoid all theoretical (weak) Dutch books, and you also give weight to the combination of these conditions at once*, then the Dutch book results give you reason to believe you should satisfy the vNM rationality axioms, since if you don’t, you can get (weakly) Dutch booked in theory. I think you should be at least as sympathetic to the conclusion of a theorem as you are to the combination of all of its assumptions, if you accept the kind of deductive logic used in the proofs.

*I might be missing more important conditions.
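To make the Dutch-book point concrete, here is a minimal sketch of my own (not part of the theorem): the classic money pump against an agent whose preferences violate transitivity, which is the simplest Dutch-book-style argument; the weak Dutch books above generalize this idea. The option names and the fee are invented for the example.

```python
# Money-pump sketch: an agent with cyclic preferences A > B > C > A
# will pay a small fee for each "upgrade" and can be led around the
# cycle forever, always trading up yet steadily losing money.

FEE = 1  # what the agent will pay to swap to an option it prefers

# Cyclic (intransitive) preferences: each key is preferred to its value.
prefers_over = {"A": "B", "B": "C", "C": "A"}

holding, wealth = "C", 100

for step in range(6):  # two full laps around the preference cycle
    # Find the option the agent prefers to what it currently holds.
    upgrade = next(better for better, worse in prefers_over.items()
                   if worse == holding)
    wealth -= FEE  # the agent happily pays to "improve" its position
    holding = upgrade
    print(f"step {step + 1}: traded up to {holding}, wealth = {wealth}")

# After six "improvements" the agent holds exactly what it started
# with, but is 6 units poorer: it has been money-pumped.
```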
> I think if you believe the conditions of the theorem are all plausible or desirable and so give them some weight, then you should give the conclusion some weight, too.
This makes sense, but the type of things that tend to convince me to believe in an ethical theory generally depend a lot on how much I resonate with the main claims of the theory. When I look at the premises in this theorem, none of them seem to be the type of things that I care about.
On the other hand, pointing out that utilitarians care about people and animals and want them to be as happy as possible (and free, or with agency and desire satisfaction) makes me happy to endorse the theory. When I think about all people and animals being happy and free from pain in a utilitarian world, I get a positive feeling. When I think about “Total utilitarians are the only ones that satisfy these three assumptions,” I don’t get the same positive feeling.
When it comes to ethics, it’s the emotional arguments that really win me over.
> This makes sense, but the type of things that tend to convince me to believe in an ethical theory generally depend a lot on how much I resonate with the main claims of the theory. When I look at the premises in this theorem, none of them seem to be the type of things that I care about.
If you want to deal with moral uncertainty with credences, you could assign each of the 3 major assumptions an independent credence of 50%, so this argument would tell you that you should be utilitarian with credence at least (1/2)^3 = 1/8 = 12.5%. (Assigning independent credences might not actually make sense, in case you have to deal with contradictions with other assumptions.)
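Written out (a minimal formalization, assuming the three credences really are independent; the symbols $A_1, A_2, A_3$ for the assumptions are mine):

$$\Pr(\text{utilitarianism}) \;\ge\; \Pr(A_1 \wedge A_2 \wedge A_3) \;=\; \prod_{i=1}^{3} \Pr(A_i) \;=\; \left(\tfrac{1}{2}\right)^{3} \;=\; \tfrac{1}{8} \;=\; 12.5\%.$$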
> On the other hand, pointing out that utilitarians care about people and animals and want them to be as happy as possible (and free, or with agency and desire satisfaction) makes me happy to endorse the theory. When I think about all people and animals being happy and free from pain in a utilitarian world, I get a positive feeling.
Makes sense. For what it’s worth, this seems basically compatible with any theory which satisfies the Pareto principle, and I’d imagine you’d also want it to be impartial (symmetry). If you also assume real-valued utilities, transitivity, independence of irrelevant alternatives, continuity and independence of unconcerned agents, you get something like utilitarianism again. In my view, independence of unconcerned agents is doing most of the work here, though.