There are some fundamental problems facing moral uncertainty that I haven’t seen its proponents even acknowledge, let alone refute:
The xkcd.com/927 problem—whatever moral uncertainty theory one expounds to deal with theories T1...Tn seems likely to constitute Tn+1. I’ve just been reading through Will’s new book, and though it addresses this one, it does so very vaguely, basically by claiming that ‘one ought under moral uncertainty theory X to do X1’ is a qualitatively different claim from ‘one ought under moral theory Y to do Y1’. This might be true, depending on some very murky questions about what norms look like, but it also seems that the latter is qualitatively different from the claim that ‘one ought under moral theory Z to do Z1’. We use the same word ‘ought’ in all three cases, but it may well be a homonym.
If one of the many subtypes of moral anti-realism is true, moral uncertainty is devoid of content: words like ‘should’, ‘ought’, etc. are either necessarily wrong or not even meaningful.
Hi Arepo,

I’ll just respond quickly, but I imagine people who are actively working on moral uncertainty stuff would be able to say much more. And I’ll split my response to each point into a separate comment.

On your first point, you may find this paper from Phil Trammell interesting. (Though I haven’t read beyond the abstract myself, and am not sure I’d understand the paper easily if I did.) The abstract reads:

When we are faced with a choice among acts, but are uncertain about the true state of the world, we may be uncertain about the acts’ “choiceworthiness”. Decision theories guide our choice by making normative claims about how we should respond to this uncertainty. If we are unsure which decision theory is correct, however, we may remain unsure of what we ought to do. Given this decision-theoretic uncertainty, meta-theories attempt to resolve the conflicts between our decision theories...but we may be unsure which meta-theory is correct as well. This reasoning can launch a regress of ever-higher-order uncertainty, which may leave one forever uncertain about what one ought to do. There is, fortunately, a class of circumstances under which this regress is not a problem. If one holds a cardinal understanding of subjective choiceworthiness, and accepts certain other criteria (which are too weak to specify any particular decision theory), one’s hierarchy of metanormative uncertainty ultimately converges to precise definitions of “subjective choiceworthiness” for any finite set of acts. If one allows the metanormative regress to extend to the transfinite ordinals, the convergence criteria can be weakened further. Finally, the structure of these results applies straightforwardly not just to decision-theoretic uncertainty, but also to other varieties of normative uncertainty, such as moral uncertainty.
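To give a concrete (if crude) sense of the convergence result described in that abstract, here is a toy numerical sketch of my own (emphatically not Trammell’s actual construction). It assumes cardinal, intertheoretically comparable choiceworthiness and, very unrealistically, that the same two weighted-averaging aggregation rules recur at every level of the hierarchy; all the numbers are made up:

```python
import numpy as np

# Level 0: two rival moral theories score three acts on a cardinal,
# intertheoretically comparable scale (all numbers hypothetical).
t1 = np.array([10.0, 4.0, 0.0])  # theory T1's choiceworthiness for acts 1-3
t2 = np.array([0.0, 6.0, 9.0])   # theory T2's choiceworthiness for acts 1-3

def rule_a(x, y):
    # One candidate aggregation rule: a weighted average leaning toward x.
    return 0.7 * x + 0.3 * y

def rule_b(x, y):
    # A rival aggregation rule: a weighted average leaning toward y.
    return 0.4 * x + 0.6 * y

# The regress: at each level, the two live candidates are rule A and rule B
# applied to the previous level's candidates. Because both rules are strict
# averages, the gap between the candidates shrinks by a factor of 0.3 per
# level, so the hierarchy converges to a single valuation.
x, y = t1, t2
for level in range(1, 100):
    x, y = rule_a(x, y), rule_b(x, y)
    if np.allclose(x, y):
        break

print(f"candidates agree by level {level}:")
print(f"subjective choiceworthiness ~ {np.round(x, 3)}")
```

The averaging structure does all the work in this toy case: each extra level of higher-order uncertainty changes the verdict by a geometrically shrinking amount. Trammell’s actual convergence criteria are much weaker than this, but the sketch illustrates why a metanormative regress need not leave one forever uncertain.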
I believe similar issues were also discussed on some episodes of the 80,000 Hours Podcast—perhaps one with Hilary Greaves?

On your second point, I think it’s true that the way I described moral uncertainty in this post applies most straightforwardly if we accept moral realism rather than antirealism. But I think much of the discussion, theories, etc. related to moral uncertainty will still be relevant given various types of moral antirealism—there’ll just need to be some adjustments in interpretation and application. (I’m not sure if this is the case for all types of moral antirealism.)

As I say in a footnote of the post:

In various places in this sequence, I will use language that may appear to endorse or presume moral realism (e.g., referring to “moral information” or to the probability of a particular moral theory being “true”). But this is essentially just for convenience; I intend this sequence to be neutral on the matter of moral realism vs antirealism, and I believe this post can be useful in mostly similar ways regardless of one’s position on that matter. I discuss the matter of “moral uncertainty for antirealists” in more detail in this separate post.

For more on that, see the post linked to there.

I also don’t think it’s true that all types of moral antirealism would mean or claim that “words like ‘should’, ‘ought’, etc. are either necessarily wrong or not even meaningful.” And I’ve talked to thoughtful antirealists who actively argue against such a view—if I recall correctly, this post is a good example of that (and in any case, it’s an interesting post).