I’m pretty confused about the argument made by this post. Pascal’s Mugging seems like a legitimately important objection to expected value based decision theory, and all of these thought experiments are basically flavours of that. This post feels like it’s just imposing scorn on that idea without making an actual argument?
I think “utilitarianism says seemingly weird shit when given large utilities and tiny probabilities” is one of the most important objections.
Is your complaint that this is an isolated demand for rigor?
I’m not well versed enough in higher mathematics to be confident in this, but it seems to me like these objections to utilitarianism are attacking it by insisting it solve problems it’s not designed to handle. We can define a “finite utilitarianism,” for example, where only finite quantities of utility are considered. In these cases, the St. Petersburg Paradox has a straightforward answer, which is that we are happy to take the positive expected value gamble, because occasionally it will pay off.
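To make that concrete, here is a rough sketch of my own (not anything from the original post), assuming the standard formulation where round k pays 2^k dollars with probability 2^-k, showing how the St. Petersburg game behaves once the bank can only pay out a finite amount:

```python
def st_petersburg_ev(cap: float) -> float:
    """Expected payout of a St. Petersburg game whose bank can pay at most `cap`.

    Round k (k = 1, 2, ...) is reached with probability 2**-k and pays
    min(2**k, cap) dollars.
    """
    ev = 0.0
    k = 1
    while 2 ** k < cap:
        ev += (2 ** -k) * (2 ** k)  # every uncapped round contributes exactly $1
        k += 1
    ev += (2 ** -(k - 1)) * cap     # all rounds from k onward pay out the cap
    return ev

for cap in (1e3, 1e6, 1e9, 1e12):
    print(f"bank capped at ${cap:.0e}: EV ≈ ${st_petersburg_ev(cap):.2f}")
```

The expected value grows only logarithmically in the cap, so for any finite bank the game is an ordinary, modestly priced gamble rather than a paradox.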
This brings utilitarianism closer to engineering than to math. In engineering, we use mathematical models to approximate the behavior of materials. However, we obtain the math models from empirical observation, and we understand that these models break down at a certain point. Despite this, they are extremely useful models for almost any practical circumstance the engineer might confront. Likewise for utilitarianism.
It’s fine to argue about transcendent, infinite utilitarianism. But both sides of the debate should be explicitly clear that it’s this very particular, distilled flavor of moral philosophy that they are debating. This would be akin, perhaps, to Einstein showing that Newton’s laws break down near the speed of light. If we are debating whether we have a moral imperative to create a galaxy-spanning civilization as soon as possible, or by contrast to systematically wipe out all the net-negative life in the universe, then these issues with pushing the limits of utilitarianism are in force. If we are debating a more ordinary moral question, such as whether or not to go to war, practical finite utilitarianism works fine.
These thought experiments also implicitly distort it by asking us to substitute some concrete good, such as saving lives or making money, for what utilitarianism actually cares about, which is utility. Our utility does not necessarily scale linearly with number of lives saved or amount of money made. But because utility is so abstract, it is hard even for a utilitarian to feel in their gut that twice the utility is twice as good, even though this is true by definition.
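As a toy illustration of that non-linearity (my own sketch, assuming a logarithmic utility curve in the spirit of Bernoulli’s classic treatment, not anything claimed in the post): doubling a concrete good need not come close to doubling utility.

```python
import math

def utility_of_wealth(wealth: float) -> float:
    """Assumed diminishing-returns utility curve; purely illustrative."""
    return math.log(wealth)

w = 50_000
ratio = utility_of_wealth(2 * w) / utility_of_wealth(w)
print(f"doubling wealth multiplies utility by {ratio:.2f}, not 2.00")
```

Twice the utility is still twice as good by definition; it is the mapping from dollars or lives saved to utility that need not be linear.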
Utilitarianism’s foil, deontology, makes crazy-sounding claims about relatively ordinary conundrums, such as “you can’t lie to the axe murderer about where his victim is hiding.” Cooking up insane situations where utilitarianism sounds just as crazy is sort of moral DARVO.
OK, that seems like a pretty reasonable position. Though if we’re restricting ourselves to everyday situations it feels a bit messy—naive utilitarianism implies things like lying a bunch or killing people in contrived situations, and I think the utility maximising decision is actually to be somewhat deontologist.
More importantly though, people do use utilitarianism in contexts with very large amounts of utility and small probabilities—see strong longtermism and the astronomical waste arguments. I think this is an important and action-relevant thing, influencing a bunch of people in EA, and that criticising this is a meaningful critique of utilitarianism, not a weird contrived thought experiment.
naive utilitarianism implies things like lying a bunch or killing people in contrived situations
I don’t know what “naive” utilitarianism is. Some possibilities include:
1. Making incorrect predictions about the net effects of your behavior on future world states, due to the ways that utilitarian concepts might misguide your epistemics.
2. Interpreting the same outcomes differently than a more “sophisticated” moral thinker would.
I would argue that (1) is basically an epistemic problem, not a moral one. If the major concern with utilitarian concepts is that they lead people to make inaccurate predictions about how their behaviors will affect the future, that is an empirical psychological problem and needs to be dealt with separately from utilitarian concepts as tools for moral reasoning.
(2) is an argument from authority.
Please let me know if you were referencing some other concern than the two I’ve speculated about here; I assume I have probably missed your point!
and I think the utility maximising decision is actually to be somewhat deontologist.
I don’t know what “be somewhat deontologist” means to you. I do think that if the same behavior is motivated by multiple contrasting moral frameworks (e.g. by both deontology and utilitarianism), that suggests it is “morally robust” and more attractive for that reason.
However, being a deontologist and not a utilitarian is only truly meaningful when the two moral frameworks would lead us to different decisions. In these circumstances, it is by definition not the utility maximizing decision to be deontologist.
If I had to guess at your meaning, it’s that “deontologist” is a psychological state, close to a personality trait or identity. Hence, it is primarily something that you can “be,” and something that you can be “somewhat” in a meaningful way. Being a deontological sort of person makes you do things that a utilitarian calculus might approve of.
More importantly though, people do use utilitarianism in contexts with very large amounts of utility and small probabilities—see strong longtermism and the astronomical waste arguments.
I agree that people do attempt to apply utilitarian concepts to make an argument for avoiding astronomical waste.
I think this is an important and action relevant thing, influencing a bunch of people in EA
I agree that if a moral argument is directing significant human endeavors, that makes it important to consider.
and that criticising this is a meaningful critique of utilitarianism
This is where I disagree with (my interpretation of) you.
I think of moral questions as akin to engineering problems.
Occasionally, it turns out that a “really big” or “really small” version of a familiar tool or material is the perfect solution for a novel engineering challenge. The Great Wall of China is an example.
Other times, we need to implement a familiar concept using unfamiliar technology, such as “molecular tweezers” or “solar sails.”
Still other times, the engineering challenge is remote enough that we have to invent a whole new category of tool, using entirely new technologies, in order to solve it.
Utilitarianism, deontology, virtue ethics, nihilism, relativism, and other frameworks all offer us “moral tools” and “moral concepts” that we can use to analyze and interpret novel “moral engineering challenges,” like the question of whether and how to steer sentient beings toward expansion throughout the lightcone.
When these tools, as we apply them today, fail to solve these novel moral conundrums in a satisfying way, that suggests some combination of their limitations, our own flawed application of them, and perhaps the potential for some new moral tools that we haven’t hit on yet.
Failure to fully solve these novel problems isn’t a “critique” of these moral tools, any more than a collapsed bridge is a “critique” of the crane that was used to build it.
The tendency to frame moral questions, like astronomical waste, as opportunities to pit one moral framework against another and see which comes out the victor, strikes me as a strange practice.
Imagine that we are living in an early era, in which there is much debate and uncertainty about whether or not it is morally good to kill heathens. Heathens are killed routinely, but people talk a lot about whether or not this is a good thing.
However, every time the subject of heathen-killing comes up, the argument quickly turns to a debate over whether the Orthodox or the Anti-Orthodox moral framework gives weirder results in evaluating the heathen-killing question. All the top philosophers from both schools of thought think of the heathen-killing question as showing up the strengths and weaknesses of the two philosophical schools.
I propose that it would be silly to participate in the Orthodox vs. Anti-Orthodox debate. Instead, I would prefer to focus on understanding the heathen-killing question from both schools of thought, and also try to rope in other perspectives: economic, political, technological, cultural, and historical. I would want to meet some heathens and some heathen-killers. I would try to get the facts on the ground. Who is leading the next war party? How will the spoils be divided up? Who has lost a loved one in the battles with the heathens? Are there any secret heathens around on our own side?
This research strikes me as far more interesting, and far more useful in working toward a resolution of the heathen-killing question, than perpetuating the Orthodox vs. Anti-Orthodox debate.
By the same token, I propose that we stop interpreting astronomical waste and similar moral conundrums as opportunities to debate the merits of utilitarianism vs. deontology vs. other schools of thought. Instead, let’s try and obtain a multifaceted, “foxy” view of the issue. I suspect that these controversial questions will begin to dissolve as we gather more information from a wider diversity of departments and experiences than we have at present.
You don’t need explicit infinities to get weird things out of utilitarianism. Strong Longtermism is already an example of how the tiny probability that your action affects a huge number of (people?) dominates the expected value of your actions in the eyes of some prominent EAs.
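To spell out the arithmetic behind that kind of domination (with numbers that are purely illustrative placeholders of mine, not anyone’s actual estimates):

```python
# Purely illustrative placeholder numbers, not drawn from any real longtermist estimate.
p_longshot = 1e-9          # tiny chance your action affects the long-term future
n_future_lives = 1e35      # astronomically many potential future lives
ev_longshot = p_longshot * n_future_lives   # 1e26 expected lives affected

ev_sure_thing = 1e6        # helping a million present people with certainty

print(ev_longshot / ev_sure_thing)  # 1e20: the longshot swamps the naive EV comparison
```

Under a straightforward expected-value sum, the longshot wins by twenty orders of magnitude, which is exactly the structure the reply below is disputing the significance of.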
You don’t need explicit infinities to get weird things out of utilitarianism.
I agree with you. Weirdness, though, is a far softer “critique” than the clear paradoxes that result from explicit infinities. And high-value low-probability moral tradeoffs aren’t even all that weird.
We need information in order to have an expected value. We can be utilitarians who deny that sufficient information is available to justify a given high-value low-probability tradeoff. Some of the critiques of “weird” longtermism lose their force once we clarify either a) that we’re ~totally uncertain about the valence of the action under consideration relative to the next-best alternative, and hence the moral conundrum is really an epistemic conundrum, or b) that we actually are very confident about its moral valence and opportunity cost, in which case the weirdness evaporates.
Consider a physicist who realizes there’s a very low but nonzero chance that detonating the first atom bomb will light the atmosphere on fire, yet who also believes that every day the bomb doesn’t get dropped on Japan extends WWII and leads to more deaths on all sides on net. For this physicist, it might still make perfect sense to spend a year testing and checking to rule out this small chance of igniting the atmosphere. I think this is not a “weird” decision from the perspective of most people, whether or not we assume the physicist is objectively correct about the epistemic aspect of the tradeoff.
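A back-of-the-envelope version of that tradeoff, with every number a made-up placeholder of mine chosen only to show the shape of the comparison, not to settle it:

```python
# All figures are hypothetical placeholders; the point is how the comparison is
# structured and how easily its verdict flips with the inputs.
p_ignite = 1e-6                  # assumed probability the bomb ignites the atmosphere
lives_lost_if_ignite = 2.5e9     # roughly everyone alive at the time (ignoring future generations)
war_deaths_per_day = 2e4         # assumed net deaths per extra day of war
days_spent_checking = 365        # a year of testing and checking

expected_cost_of_checking = days_spent_checking * war_deaths_per_day   # ~7.3e6 expected deaths
expected_catastrophe_averted = p_ignite * lives_lost_if_ignite         # ~2.5e3 expected deaths

print(expected_cost_of_checking, expected_catastrophe_averted)
```

With these placeholders the year of checking looks hard to justify; raise p_ignite by a couple of orders of magnitude, or count future generations among the lives at stake, and the verdict flips. That sensitivity to the inputs is why so much of the apparent weirdness here is really about the epistemic aspect of the tradeoff.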
To a certain extent, it’s utilitarianism that invites these potential critiques. If a theory says that probabilities/expected value are integral to figuring out what to do, then questions about very large or very small probabilities/expected values are fair game. And looking at extreme and near-extreme cases is a legitimate philosophical heuristic.
Correct me if I am wrong, but I don’t necessarily see the St. Petersburg Paradox as being the same as Pascal’s Mugging. The latter is a criticism of speculation, and the former is more of an intuitive critique of expected value theory.