Why do you consider completeness self-evident? (Or continuity, although I’m more sympathetic to that one.)
Also, it’s important not to conflate “given these axioms, your preferences can be represented as maximizing expected utility w.r.t. some utility function” with “given these axioms [and a precise probability distribution representing your beliefs], you ought to make decisions by maximizing expected value, where ‘value’ is given by the axiology you actually endorse.” I’d recommend this paper on the topic (especially Sec. 4), and Sec. 2.2 here.
I think completeness is self-evident because “the individual must express some preference or indifference”. Reality forces them to do so. For example, if they donate to organisation A rather than B, they at least implicitly imply that donating to A is at least as good as donating to B. If they decide to keep the money for personal consumption, they at least implicitly imply that keeping it is at least as good as donating it.
I believe continuity is self-evident because rejecting it implies seemingly nonsensical decisions. For example, if one prefers 100 $ over 10 $, and 10 $ over 1 $, continuity says there is a probability p such that one is indifferent between 10 $ and a lottery with probability p of winning 1 $ and probability 1 - p of winning 100 $. One would prefer the lottery with p = 0 over 10 $, because then one would be certain to win 100 $. One would prefer 10 $ over the lottery with p = 1, because then one would be certain to win 1 $. If there were no tipping point between preferring the lottery and preferring the 10 $, one would have to be insensitive to an increased probability of an outcome better than 10 $ (100 $), and a decreased probability of an outcome worse than 10 $ (1 $), which I see as nonsensical.
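Under expected-value maximization, the tipping point in the lottery above can be computed directly. A minimal sketch, assuming (for illustration only, not something the thread commits to) that utility is linear in dollars:

```python
# Lottery from the example: win 1 $ with probability p, 100 $ with
# probability 1 - p, compared against a sure 10 $.
# Assumption for illustration: utility is linear in dollars.

def lottery_ev(p: float) -> float:
    """Expected value of the lottery for a given probability p of winning 1 $."""
    return p * 1 + (1 - p) * 100

# Continuity asserts some p makes the agent indifferent between the
# lottery and a sure 10 $.  Solving p*1 + (1 - p)*100 = 10 gives p = 90/99.
p_star = 90 / 99
assert abs(lottery_ev(p_star) - 10) < 1e-9

# The EV falls monotonically from 100 (at p = 0) to 1 (at p = 1), so the
# tipping point is unique: below p_star the lottery beats 10 $, above it
# the sure 10 $ wins.
assert lottery_ev(p_star - 0.01) > 10 > lottery_ev(p_star + 0.001)
```

The uniqueness of the crossing point is what the monotonicity argument in the paragraph above relies on: more weight on the worse-than-10 $ outcome can only lower the lottery's value.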
Thanks! I’ll just respond re: completeness for now.
When we ask “why should we maximize EV,” we’re interested in the reasons for our choices. Recognizing that I’m forced by reality to either donate or not-donate doesn’t help me answer whether it’s rational to strictly prefer donating, strictly prefer not-donating, be precisely indifferent, or none of the above.
Incomplete preferences have at least one qualitatively different property from complete ones, described here, and reality doesn’t force you to violate this property.
Not that you’re claiming this directly, but just to flag, because in my experience people often conflate these things: Even if in some sense your all-things-considered preferences need to be complete, this doesn’t mean your preferences w.r.t. your first-order axiology need to be complete. For example, take the donation case. You might be very sympathetic to a total utilitarian axiology, but when deciding whether to donate, your evaluation of the total utilitarian betterness-under-uncertainty of one option vs. another doesn’t need to be complete. You might, say, just rule out options that are stochastically dominated w.r.t. total utility, and then decide among the remaining options based on non-consequentialist considerations. (More on this idea here.)
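The "rule out stochastically dominated options" step above can be sketched concretely. A hypothetical illustration, with lotteries as mappings from total-utility outcomes to probabilities (the numbers are invented, not from the comment):

```python
# First-order stochastic dominance: X dominates Y if, at every threshold,
# X gives at least as high a chance of exceeding it, and a strictly
# higher chance somewhere.  Equivalently, X's CDF never exceeds Y's and
# is strictly lower at some point.

def cdf(lottery: dict, t: float) -> float:
    """P(outcome <= t) under the lottery."""
    return sum(p for outcome, p in lottery.items() if outcome <= t)

def dominates(x: dict, y: dict) -> bool:
    """True iff x first-order stochastically dominates y."""
    thresholds = sorted(set(x) | set(y))
    return all(cdf(x, t) <= cdf(y, t) for t in thresholds) and any(
        cdf(x, t) < cdf(y, t) for t in thresholds
    )

# Hypothetical total-utility lotteries for two options:
donate = {10: 0.5, 100: 0.5}
keep   = {10: 0.5, 50: 0.5}

assert dominates(donate, keep)      # "keep" can be ruled out
assert not dominates(keep, donate)
```

Options surviving this filter would then, on the proposal above, be ranked by non-consequentialist considerations rather than by a complete betterness ordering.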
2. Incomplete preferences have at least one qualitatively different property from complete ones, described here, and reality doesn’t force you to violate this property.
I read the section you linked, and I understand preferential gaps are the property of incomplete preferences which you are referring to. I do not think preferential gaps make sense in principle. If one were exactly indifferent between two outcomes, I believe any improvement/worsening of one of them must make one prefer one outcome over the other. At the same time, if one is only roughly indifferent between two outcomes, a sufficiently small improvement/worsening of one of them will still leave one practically indifferent between them. For example, although I think i) 1 $ plus a 10^-100 chance of an additional 1 $ is clearly better than ii) 1 $, I am practically indifferent between i) and ii), because the expected value of 10^-100 $ is negligible.
3. Not that you’re claiming this directly, but just to flag, because in my experience people often conflate these things: Even if in some sense your all-things-considered preferences need to be complete, this doesn’t mean your preferences w.r.t. your first-order axiology need to be complete.
Both are complete for me, as I fully endorse expectational total hedonistic utilitarianism (ETHU) in principle. In practice, I think it is useful to rely on heuristics from other moral theories to make better decisions under ETHU. I believe the categorical imperative is a great one, for example, even though it is central to deontology.
To be clear, “preferential gap” in the linked article just means incomplete preferences. The property in question is insensitivity to mild sweetening.
If one was exactly indifferent between 2 outcomes, I believe any improvement/worsening of one of them must make one prefer one of the outcomes over the other
But that’s exactly the point — incompleteness is not equivalent to indifference, because when you have an incomplete preference between 2 outcomes it’s not the case that a mild improvement/worsening makes you have a strict preference. I don’t understand what you think doesn’t “make sense in principle” about insensitivity to mild sweetening.
I fully endorse expectational total hedonistic utilitarianism (ETHU) in principle
As in you’re 100% certain, and wouldn’t put weight on other considerations even as a tiebreaker? That seems extreme. (If, say, you became convinced all your options were incomparable from an ETHU perspective because of cluelessness, you would presumably still all-things-considered-prefer not to do something that injures yourself for no reason.)
As in you’re 100% certain, and wouldn’t put weight on other considerations even as a tiebreaker?
Yes.
(If, say, you became convinced all your options were incomparable from an ETHU perspective because of cluelessness, you would presumably still all-things-considered-prefer not to do something that injures yourself for no reason.)
Injuring myself can easily be assessed under ETHU: it directly affects my own mental states, and indirectly those of others by decreasing my productivity.