When you’ve read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you’ve seen the “Dutch book” and “money pump” effects that penalize trying to handle uncertain outcomes any other way, then you don’t see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn’t goddamn multiply.
The primitive, perceptual intuitions that make a choice “feel good” don’t handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words.
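To see concretely what it means to think it through rather than trust the feeling, here is a minimal sketch of the standard Allais gambles (the payoffs are the usual illustrative ones, not anything from the text above, and the sample utility functions are assumptions for demonstration). The point is that no assignment of utilities can prefer 1A to 1B while also preferring 2B to 2A, because the two expected-utility differences are literally the same quantity.

```python
import math

# A minimal sketch of the standard Allais gambles.  Lotteries are lists of
# (probability, dollar outcome) pairs; nothing here is from the text above.

def expected_utility(lottery, u):
    """Expected utility of a lottery under a utility function u."""
    return sum(p * u(x) for p, x in lottery)

# Gamble 1: 1A = $1M for certain;  1B = 89% $1M, 10% $5M, 1% nothing.
# Gamble 2: 2A = 11% $1M, 89% nothing;  2B = 10% $5M, 90% nothing.
g1A = [(1.00, 1_000_000)]
g1B = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
g2A = [(0.11, 1_000_000), (0.89, 0)]
g2B = [(0.10, 5_000_000), (0.90, 0)]

# The common human pattern is to prefer 1A over 1B *and* 2B over 2A.  But
#   EU(1A) - EU(1B) = 0.11*u($1M) - 0.10*u($5M) - 0.01*u($0)
#   EU(2A) - EU(2B) = 0.11*u($1M) - 0.10*u($5M) - 0.01*u($0)
# are the same quantity, so no utility function can make both preferences come
# out of expected utility maximization.  Check numerically for a few sample u:
for name, u in [("linear", lambda x: x),
                ("square root", lambda x: x ** 0.5),
                ("log(1 + x)", math.log1p)]:
    d1 = expected_utility(g1A, u) - expected_utility(g1B, u)
    d2 = expected_utility(g2A, u) - expected_utility(g2B, u)
    print(f"{name:12s} EU(1A)-EU(1B) = {d1:+,.3f}   EU(2A)-EU(2B) = {d2:+,.3f}")
```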
When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities.
Part of it, clearly, is that primitive intuitions don’t successfully diminish the emotional impact of symbols standing for small quantities—anything you talk about seems like “an amount worth considering.”
And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.
So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life.” Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
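Taken literally, that refusal carries a price tag, computed from nothing but the two numbers in the sentence above:

```python
# The tradeoff from the sentence above, made explicit.
dollars = 1_000
chance_of_saving_a_life = 0.0000000001 / 100   # 0.0000000001 percent = 1e-12

# Insisting that the tiny chance of saving a life still outweighs the $1,000
# implies pricing a statistical life at at least dollars / chance:
implied_price_of_a_life = dollars / chance_of_saving_a_life
print(f"${implied_price_of_a_life:,.0f}")       # $1,000,000,000,000,000 -- a quadrillion dollars
```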
The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.
On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.
But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from zero to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.
As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
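Norvig’s point can be checked with a toy sketch (the two-tier numbers below are invented purely for illustration): under lexicographic comparison, the lower tier can only matter on an exact tie in the upper tier, and with any continuous upper-tier score an exact tie essentially never occurs.

```python
import random

# Toy sketch of two-tier, lexically ordered values.  Each option carries a
# tiny upper-tier score (think: a minuscule First Law factor) and a large
# lower-tier score (everything else).  Lexicographic choice compares the upper
# tier first and consults the lower tier only on an exact tie.

random.seed(0)
trials, lower_tier_mattered = 100_000, 0
for _ in range(trials):
    options = [{"upper": random.gauss(0.0, 1e-9),   # minuscule upper-tier differences
                "lower": random.gauss(0.0, 1.0)}    # large lower-tier differences
               for _ in range(5)]
    # Python tuples compare lexicographically, so this is the whole decision rule.
    lexicographic_winner = max(options, key=lambda o: (o["upper"], o["lower"]))
    upper_tier_only_winner = max(options, key=lambda o: o["upper"])
    if lexicographic_winner is not upper_tier_only_winner:
        lower_tier_mattered += 1

print(f"Decisions changed by the lower tier: {lower_tier_mattered} of {trials}")
# Prints 0: with a continuous upper tier, the lower tier vanishes from every decision.
```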
Whatever value is worth thinking about at all must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.
I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize—that the valuation of this one event is more complex than I know.
But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided—at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.” Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.
It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that’s why I’m a utilitarian—at least when I am doing something that is overwhelmingly more important than my own feelings about it—which is most of the time, because there are not many utilitarians, and many things left undone.
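Here is what that multiplication looks like in the simplest case, a minimal sketch using a standard pair of illustrative numbers (a certain save of 400 lives against a 90% chance of saving 500; the figures are not from the text above):

```python
# A minimal sketch of "shut up and multiply" when lives are at stake.
# The options and numbers are standard illustrative ones, not from the text above.
options = {
    "save 400 lives, with certainty": (1.00, 400),
    "save 500 lives with 90% probability, none with 10%": (0.90, 500),
}

def expected_lives_saved(probability, lives):
    return probability * lives

for name, (p, n) in options.items():
    print(f"{expected_lives_saved(p, n):6.1f} expected lives saved  <-  {name}")

best = max(options, key=lambda name: expected_lives_saved(*options[name]))
print("Multiply, then choose:", best)   # the 90% option: 450 > 400
```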
… Also, just to be clear—since this seems to be a weirdly common misconception—acting like an expected value maximizer is totally different from utilitarianism. EV maximizing is a thing wherever you consistently care enough about your actions’ consequences; utilitarianism is specifically the idea that the thing people should (act as though they) care about is how good things are for everyone, impartially.
But often people argue against the consequentialism aspect of utilitarianism and the consequent willingness to quantitatively compare different goods, rather than arguing against the altruism aspect or the egalitarianism; hence the two ideas get blurred together a bit in the above, even though you can certainly maximize expected utility for conceptions of “utility” that are partial to your own interests, your friends’, etc.
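A toy sketch of that distinction (the actions, numbers, and both utility functions below are invented for illustration): the expected-value machinery is identical in both cases, and only the choice of utility function makes the agent utilitarian or partial.

```python
# Toy sketch: expected value maximization is the generic machinery; what makes
# an agent "utilitarian" is only the particular utility function plugged in.
# All actions, probabilities, and payoffs here are invented for illustration.

# Each action leads to a lottery: a list of (probability, outcome) pairs,
# where an outcome records how well each person fares.
lotteries = {
    "help a stranger": [(1.0, {"me": 0, "stranger": 10})],
    "help myself":     [(1.0, {"me": 6, "stranger": 0})],
    "risky gamble":    [(0.5, {"me": 20, "stranger": 0}),
                        (0.5, {"me": -10, "stranger": 0})],
}

def impartial_utility(outcome):          # everyone counts equally
    return sum(outcome.values())

def partial_utility(outcome):            # weighted toward my own interests
    return 3 * outcome["me"] + 1 * outcome["stranger"]

def best_action(utility):
    def expected_value(lottery):
        return sum(p * utility(outcome) for p, outcome in lottery)
    return max(lotteries, key=lambda action: expected_value(lotteries[action]))

print("Impartial utility chooses:", best_action(impartial_utility))   # help a stranger
print("Partial utility chooses:  ", best_action(partial_utility))     # help myself
```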
For more background on what I mean by ‘any policy of caring a lot about strangers will tend to recommend behavior reminiscent of expected value maximization, the more so the more steadfast and strong the caring is’, see e.g. ‘Coherent decisions imply a utility function’ and The “Intuitions” Behind “Utilitarianism”.