Thanks for this post.

Could you explain what you mean by "open-ended normative uncertainty" and/or "open-ended notions of moral uncertainty", as distinct from the more general concepts of normative/moral uncertainty?
Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all; it seems more like logical or empirical uncertainty.
(Or feel free to let this be answered by your future post on "moral reflection from an anti-realist perspective".)
Good question!

By "open-ended moral uncertainty" I mean being uncertain about one's values without having in mind well-defined criteria (either implicit or explicit) for what constitutes a correct solution.
Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all; it seems more like logical or empirical uncertainty.
Yes, this captures it well. I'd say most of the usage of "moral uncertainty" in EA circles is at least in part open-ended, so this is in agreement with your intuition that maybe what I'm describing isn't "normative uncertainty" at all. I think many effective altruists use "moral uncertainty" in a way that either fails to refer to anything meaningful, or implies under-determined moral values. (I think this can often be okay. Our views on lots of things are under-determined, and there isn't necessarily anything wrong with that. But sometimes it can be bad to think that something is well-determined when it's not.)
Now, I didn't necessarily mean to suggest that the only defensible way to think that morality has enough "structure" to deserve the label "moral realism" is to advance an object-level normative theory that specifies every single possible detail. If someone subscribes to hedonistic total utilitarianism but leaves it under-defined to what degree bees can feel pleasure, maybe that still qualifies as moral realism. But if someone is so morally uncertain that they don't know whether they favor preference utilitarianism or hedonistic utilitarianism, or whether they might favor some kind of prioritarianism after all, or even something entirely different such as Kantianism, moral particularism, etc., then I would ask them: "Why do you think the question you're asking yourself is well-defined? What are you uncertain about? Why do you expect there to be a speaker-independent solution to this question?"
To be clear, I'm not arguing that one cannot be in a state of uncertainty between, for instance, preference utilitarianism and hedonistic utilitarianism. I'm just saying that, as far as I can tell, the way to make this work satisfactorily would be based on anti-realist assumptions. The question we're asking, in this case, isn't "What's the true moral theory?" but "Which moral theory would I come to endorse if I thought about this question more?"
This is an interesting perspective. I have indeed noticed for a while that my moral uncertainty has the very weird feature that I'm not even sure what shape or type of solution I'm after, or what criteria I'd evaluate it against. And this seems to mesh well with your comments about this being ill-defined, and a matter where people don't even know what they're uncertain about.
Thus far, I've basically responded to that issue with the thought: "I'm extremely confused about lots of things, including things that I have reason to believe really do correspond to reality, like quantum mechanics or the 'beginning' or 'ending' of the universe. So even if I'm extremely confused about this, maybe there's still something real going on there that I'm uncertain about, rather than there just being nothing [in the sense of speaker-independent normativity] going on there." (I'm aware that anti-realism doesn't mean "there's no normativity at all going on here".)
But I definitely think that the case for believing in things like quantum mechanics despite not understanding them is much stronger than the case for believing in things like speaker-independent normativity despite not understanding it.
Also, just in case this wasn't clear, by those sentences of mine that you quoted, I meant that I'm not sure I'd call "uncertainty that's just about what should follow from our fundamental goals" normative/moral uncertainty, rather than logical or empirical uncertainty. I would call "uncertainty about what our fundamental goals should be" normative/moral uncertainty. (And then that's subject to your criticisms.)