OK, that seems like a pretty reasonable position. Though if we’re restricting ourselves to everyday situations it feels a bit messy—naive utilitarianism implies things like lying a bunch or killing people in contrived situations, and I think the utility maximising decision is actually to be somewhat deontologist.
More importantly though, people do use utilitarianism in contexts with very large amounts of utility and small probabilities—see strong longtermism and the astronomical waste arguments. I think this is an important and action-relevant thing, influencing a bunch of people in EA, and that criticising this is a meaningful critique of utilitarianism, not a weird contrived thought experiment.
naive utilitarianism implies things like lying a bunch or killing people in contrived situations
I don’t know what “naive” utilitarianism is. Some possibilities include:
1. Making incorrect predictions about the net effects of your behavior on future world states, due to the ways that utilitarian concepts might misguide your epistemics.
2. Interpreting the same outcomes differently than a more “sophisticated” moral thinker would.
I would argue that (1) is basically an epistemic problem, not a moral one. If the major concern with utilitarian concepts is that they lead people to make inaccurate predictions about how their behaviors will affect the future, that is an empirical, psychological problem, and it needs to be addressed separately from the question of whether utilitarian concepts are good tools for moral reasoning.
(2) is an argument from authority: it takes the “sophisticated” thinker’s interpretation to be correct simply because of who is doing the interpreting.
Please let me know if you were referencing some other concern than the two I’ve speculated about here; I assume I have probably missed your point!
and I think the utility maximising decision is actually to be somewhat deontologist.
I don’t know what “be somewhat deontologist” means to you. I do think that if the same behavior is motivated by multiple contrasting moral frameworks (e.g. by both deontology and utilitarianism), that suggests it is “morally robust” and more attractive for that reason.
However, being a deontologist rather than a utilitarian is only truly meaningful when the two moral frameworks would lead us to different decisions, and in those circumstances it is by definition not the utility-maximizing decision to be a deontologist.
If I had to guess at your meaning, it’s that “deontologist” is a psychological state, close to a personality trait or identity. Hence, it is primarily something that you can “be,” and something that you can be “somewhat” in a meaningful way. Being a deontological sort of person makes you do things that a utilitarian calculus might approve of.
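As a toy illustration of that last point (my own construction, with invented payoffs, not anything you said): a disposition to keep agreements can beat case-by-case maximization on a purely utilitarian accounting, because a reputation for defecting destroys future cooperation.

```python
# Toy model with invented payoffs: in a repeated trust game, an agent
# with a rule-like commitment to cooperate ("somewhat deontologist")
# can out-earn an agent who maximizes utility round by round.

ROUNDS = 100
COOPERATE_PAYOFF = 3  # payoff per round of mutual trust
DEFECT_PAYOFF = 5     # one-off gain from betraying trust, after which
                      # the partner never trusts this agent again

def rule_follower_total() -> int:
    # Always honors the agreement, so cooperation persists every round.
    return ROUNDS * COOPERATE_PAYOFF

def round_by_round_maximizer_total() -> int:
    # Defects in round one (5 > 3 locally), then earns nothing,
    # since trust is gone for the remaining rounds.
    return DEFECT_PAYOFF

print(rule_follower_total())             # 300
print(round_by_round_maximizer_total())  # 5
```

Under these assumed payoffs, the committed cooperator ends up with 60x the total utility, which is one way to cash out “the utility maximising decision is actually to be somewhat deontologist.”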
More importantly though, people do use utilitarianism in contexts with very large amounts of utility and small probabilities—see strong longtermism and the astronomical waste arguments.
I agree that people do attempt to apply utilitarian concepts to make an argument for avoiding astronomical waste.
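For concreteness, here is a minimal sketch of the expected-value structure behind those arguments, with all numbers invented for illustration rather than taken from any particular longtermist estimate:

```python
# Toy expected-value comparison, with invented numbers, illustrating
# why astronomical stakes dominate a naive utilitarian calculus:
# a tiny probability multiplied by an astronomical payoff swamps
# any ordinary-scale intervention.

POTENTIAL_FUTURE_LIVES = 1e35  # hypothetical count of future lives at stake
P_INTERVENTION_HELPS = 1e-10   # vanishingly small chance the
                               # speculative intervention matters

ev_speculative = P_INTERVENTION_HELPS * POTENTIAL_FUTURE_LIVES
ev_certain = 1.0 * 1e4         # certainly saving ten thousand lives

print(f"speculative: {ev_speculative:.1e}")  # speculative: 1.0e+25
print(f"certain:     {ev_certain:.1e}")      # certain:     1.0e+04
```

On these made-up numbers, the speculative intervention wins by twenty-one orders of magnitude, which is precisely the feature of the reasoning that draws criticism.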
I think this is an important and action relevant thing, influencing a bunch of people in EA
I agree that if a moral argument is directing significant human endeavors, that makes it important to consider.
and that criticising this is a meaningful critique of utilitarianism
This is where I disagree with (my interpretation of) you.
I think of moral questions as akin to engineering problems.
Occasionally, it turns out that a “really big” or “really small” version of a familiar tool or material is the perfect solution for a novel engineering challenge. The Great Wall of China is an example: an ordinary wall, scaled up to an extraordinary size.
Other times, we need to implement a familiar concept using unfamiliar technology, such as “molecular tweezers” or “solar sails.”
Still other times, the engineering challenge is remote enough that we have to invent a whole new category of tool, using entirely new technologies, in order to solve it.
Utilitarianism, deontology, virtue ethics, nihilism, relativism, and other frameworks all offer us “moral tools” and “moral concepts” that we can use to analyze and interpret novel “moral engineering challenges,” like the question of whether and how to steer sentient beings toward expansion throughout the lightcone.
When these tools, as we apply them today, fail to solve these novel moral conundrums in a satisfying way, that suggests some combination of their limitations, our own flawed application of them, and perhaps the potential for some new moral tools that we haven’t hit on yet.
Failure to fully solve these novel problems isn’t a “critique” of these moral tools, any more than a collapsed bridge is a “critique” of the crane that was used to build it.
The tendency to frame moral questions like astronomical waste as opportunities to pit one moral framework against another and see which comes out the victor strikes me as a strange practice.
Imagine that we are living in an early era, in which there is much debate and uncertainty about whether or not it is morally good to kill heathens. Heathens are killed routinely, but people talk a lot about whether or not this is a good thing.
However, every time the subject of heathen-killing comes up, the argument quickly turns into a debate over whether the Orthodox or the Anti-Orthodox moral framework gives weirder results when evaluating the heathen-killing question. All the top philosophers from both schools of thought treat the heathen-killing question as a showcase for the strengths and weaknesses of the two philosophical schools.
I propose that it would be silly to participate in the Orthodox vs. Anti-Orthodox debate. Instead, I would prefer to focus on understanding the heathen-killing question from both schools of thought, and also try to rope in other perspectives: economic, political, technological, cultural, and historical. I would want to meet some heathens and some heathen-killers. I would try to get the facts on the ground. Who is leading the next war party? How will the spoils be divided up? Who has lost a loved one in the battles with the heathens? Are there any secret heathens on our own side?
This research strikes me as far more interesting, and far more useful in working toward a resolution of the heathen-killing question, than perpetuating the Orthodox vs. Anti-Orthodox debate.
By the same token, I propose that we stop interpreting astronomical waste and similar moral conundrums as opportunities to debate the merits of utilitarianism vs. deontology vs. other schools of thought. Instead, let’s try to obtain a multifaceted, “foxy” view of the issue. I suspect that these controversial questions will begin to dissolve as we gather more information from a wider diversity of disciplines and experiences than we have at present.