Cool post!
From the structure of your writing (mostly the high number of subtitles), I often wasn’t sure whether you were endorsing a specific approach or just laying out what the options are and what people could do. (That’s probably fine, because I see the point of good philosophy as “clearly laying out the option space” anyway.)
In any case, I think you hit on the things I also find relevant. E.g., even as a self-identifying moral anti-realist, I place a great deal of importance on “aim for simplicity (if possible/sensible)” in practice.
Some thoughts where I either disagree or have something important to add:
Another objection to 4. moral ambiguity, in addition to what you already listed under 4. i, is that sometimes the extension of an intuitive principle is itself ambiguous. For instance, consider the intuitive principle, “what we want to do to others is what is in their interests.” How do we extend that principle to situations where the number of others isn’t fixed? We now face multiple levels of underdefinedness (no wonder, then, that population ethics is considered difficult or controversial):
(1) It’s underdefined how many new people with interests/goals there will be.
(2) It’s underdefined which interests/goals a new person will have.
(See here for an exploration of what this could imply.)
I endorse what you call “embracing ‘biases’” in some circumstances, but I would phrase that in a more appealing way. :) I know you put “biases” in quotation marks, but it still sounds a bit unappealing that way. The way I would put it:
Morality is inherently underdefined (see, e.g., my previous bullet point), so we are faced with the option to either embrace that underdefinedness or go with a particular axiology not because it’s objectively justified, but because we happen to care deeply about it. Instead of “embracing ‘biases,’” I’d call the latter “filling out moral underdefinedness by embracing strongly held intuitions.” (See also the subsection Anticipating objections (dialogue) in my post on moral uncertainty from an anti-realist perspective.)
Regarding what you describe as the particularist solution to moral uncertainty: is that really any different from the following?
Imagine you have a “moral parliament” in your head filled with advocates for moral views and intuitions that you find overall very appealing and didn’t want to distill down any further. (Those advocates might be represented at different weights.) Whenever a tough decision comes up, you mentally simulate bargaining among those advocates, where the ones who have a strong opinion on the matter in question will speak up the loudest and throw in a higher portion of their total bargaining allowance. (There’s a toy sketch of this bargaining below, after this bullet.)
This approach will tend to give you the same answer as the particularist one in practice(?), but it seems maybe a bit more principled in the way it’s described?
Also, I want to flag that this isn’t just an approach to moral uncertainty—you can also view it as a full-blown normative theory in the light of undecidedness between theories.
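To make the bargaining picture concrete, here is a toy sketch in Python. Everything in it (the advocates, their weights, the “stakes” numbers, and the simple rule that caring more means spending more of your allowance) is made up for illustration; it’s not meant as a precise model of the parliament idea.

```python
# Toy "moral parliament": each advocate has a weight (how seriously I take the
# view overall) and a bargaining allowance that gets used up across decisions.
advocates = {
    "total utilitarian": {"weight": 0.40, "allowance": 1.0},
    "suffering-focused": {"weight": 0.35, "allowance": 1.0},
    "common-sense deontology": {"weight": 0.25, "allowance": 1.0},
}

def decide(options, stakes):
    """Pick the option with the most weighted support for one decision.

    options[name] is the option that advocate favours here;
    stakes[name] (0 to 1) is how much they care about this decision.
    Advocates who care more spend more of their remaining allowance,
    i.e. they "speak up loudest" on the issues that matter most to them.
    """
    votes = {}
    for name, adv in advocates.items():
        spend = stakes[name] * adv["allowance"]   # how much they throw in
        adv["allowance"] -= spend                 # spent allowance is gone
        option = options[name]
        votes[option] = votes.get(option, 0.0) + adv["weight"] * spend
    return max(votes, key=votes.get)

# A decision the suffering-focused advocate cares about far more than the
# others: it wins here despite having a lower overall weight.
print(decide(
    options={"total utilitarian": "A", "suffering-focused": "B",
             "common-sense deontology": "A"},
    stakes={"total utilitarian": 0.2, "suffering-focused": 0.9,
            "common-sense deontology": 0.1},
))  # -> B
```

The only point of the sketch is that a lower-weight view can still win the decisions it cares about most, because that’s where it spends its bargaining allowance, which seems to be what makes this behave like the particularist approach in practice.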
“If we think moral realism is true, we’d expect the best theories of morality to be simple as simplicity is an epistemic virtue.” This is just a tangential point, but I’ve seen other people use this sort of reasoning as an argument for hedonist utilitarianism (because that view is particularly simple). I just want to flag that this line of argument doesn’t work because confident belief in moral realism and moral uncertainty don’t go together. In other words, the only worlds in which you’re justified to be a confident moral realist are worlds where you already know the complete moral reality. Basically, if you’re morally uncertain, you’re by necessity also metaethically uncertain, which means that you can’t just bet on pure simplicity with all your weight (to the point that you would bite large bullets that you otherwise—under anti-realism—wouldn’t bite). (Also, if someone wanted to bet on pure simplicity, I’d wager that tranquilism is simpler than hedonism—but again, I don’t think we should take aim-for-simplicity reasoning quite that far.)
Thanks for the nice comment. Yeah, I think this was more of “laying out the option space.”
All very interesting points!