Thanks for this post.
Could you explain what you mean by “open-ended normative uncertainty” and/or “open-ended notions of moral uncertainty”, as distinct from the more general concepts of normative/moral uncertainty?
Footnote 26 leaves me with the impression that perhaps you mean something like “uncertainty about what our fundamental goals should be, rather than uncertainty that’s just about what should follow from our fundamental goals”. But I’m not sure I’d call the latter type of uncertainty normative/moral uncertainty at all—it seems more like logical or empirical uncertainty.
(Or feel free to let this be answered by your future post on “moral reflection from an anti-realist perspective”.)
Good question!
By “open-ended moral uncertainty” I mean being uncertain about one’s values without having in mind well-defined criteria (either implicit or explicit) for what constitutes a correct solution.
Yes, your reading of footnote 26 captures it well. I’d say most usage of “moral uncertainty” in EA circles is at least in part open-ended, which agrees with your intuition that maybe what I’m describing isn’t “normative uncertainty” at all. I think many effective altruists use “moral uncertainty” in a way that either fails to refer to anything meaningful or implies under-determined moral values. (This can often be okay: our views on lots of things are under-determined, and there isn’t necessarily anything wrong with that. But sometimes it can be bad to think that something is well-determined when it’s not.)
Now, I didn’t necessarily mean to suggest that the only defensible way to think that morality has enough “structure” to deserve the label “moral realism” is to advance an object-level normative theory that specifies every single possible detail. If someone subscribes to hedonistic total utilitarianism but leaves it under-defined to what degree bees can feel pleasure, maybe that still qualifies as moral realism. But if someone is so morally uncertain that they don’t know whether they favor preference utilitarianism or hedonistic utilitarianism, whether they might favor some kind of prioritarianism after all, or even something entirely different such as Kantianism or moral particularism, then I would ask them: “Why do you think the question you’re asking yourself is well-defined? What are you uncertain about? Why do you expect there to be a speaker-independent solution to this question?”
To be clear, I’m not arguing that one cannot be in a state of uncertainty between, for instance, preference utilitarianism and hedonistic utilitarianism. I’m just saying that, as far as I can tell, the way to make this work satisfactorily would be based on anti-realist assumptions. The question we’re asking, in this case, isn’t “What’s the true moral theory?” but “Which moral theory would I come to endorse if I thought about this question more?”
This is an interesting perspective. I have indeed noticed for a while that my moral uncertainty has the very weird feature that I’m not even sure what shape or type of solution I’m after, or what criteria I’d evaluate it against. That meshes well with your point that the question seems ill-defined, and that people often don’t even know what they’re uncertain about.
Thus far, I’ve basically responded to that issue with the thought: “I’m extremely confused about lots of things, including things that I have reason to believe really do correspond to reality, like quantum mechanics or the ‘beginning’ or ‘ending’ of the universe. So even if I’m extremely confused about this, maybe there’s still something real going on there that I’m uncertain about, rather than there just being nothing [in the sense of speaker-independent normativity] going on.” (I’m aware that anti-realism doesn’t mean “there’s no normativity at all going on here.”)
But I definitely think that the case for believing in things like quantum mechanics despite not understanding them is much stronger than the case for believing in things like speaker-independent normativity despite not understanding it.
Also, just in case this wasn’t clear: with those sentences of mine about footnote 26, I meant that I’m not sure I’d call “uncertainty that’s just about what should follow from our fundamental goals” normative/moral uncertainty, rather than logical or empirical uncertainty. I would call “uncertainty about what our fundamental goals should be” normative/moral uncertainty. (And that, in turn, is subject to your criticisms.)