This is an interesting post, and I have a couple of things to say in response. I’m copying over the part of my shortform that deals with this:
Normative Realism by degrees
Further to the whole question of Normative / moral realism, there is this post on Moral Anti-Realism. While I don’t really agree with it, I do recommend reading it—one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don’t make ethical claims beyond ‘self-evident’ ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don’t have enough ‘material to work with’ for your theory to plausibly refer to anything. The Moral Anti-Realism post presents this dilemma for the moral realist:
There are instances where just a handful of examples or carefully selected “pointers” can convey all the meaning needed for someone to understand a far-reaching and well-specified concept. I will give two cases where this seems to work (at least superficially) to point out how—absent a compelling object-level theory—we cannot say the same about “normativity.”
...these thought experiments illustrate that under the right circumstances, it’s possible for just a few carefully selected examples to successfully pinpoint fruitful and well-specified concepts in their entirety. We don’t have the philosophical equivalent of a background understanding of chemistry or formal systems… To maintain that normativity—reducible or not—is knowable at least in theory, and to separate it from merely subjective reasons, we have to be able to make direct claims about the structure of normative reality, explaining how the concept unambiguously targets salient features in the space of possible considerations. It is only in this way that the ambitious concept of normativity could attain successful reference. As I have shown in previous sections, absent such an account, we are dealing with a concept that is under-defined, meaningless, or forever unknowable.
The challenge for normative realists is to explain how irreducible reasons can go beyond self-evident principles and remain well-defined and speaker-independent at the same time.
To a large degree, I agree with this claim—I think that many moral realists do as well. Convergence-type arguments often appear in more recent metaethics (Hare and Parfit are in those previous lists), so this may already have been recognised. The post discusses such a response to antirealism at the end:
I titled this post “Against Irreducible Normativity.” However, I believe that I have not yet refuted all versions of irreducible normativity. Despite the similarity Parfit’s ethical views share with moral naturalism, Parfit was a proponent of irreducible normativity. Judging by his “climbing the same mountain” analogy, it seems plausible to me that his account of moral realism escapes the main force of my criticism thus far.
But there’s one point I want to make which is in disagreement with that post. I agree that how much you can concretely say about your supposed mind-independent domain of facts affects how plausible its existence should seem, and even how coherent the concept is, but I think that this can come by degrees. This should not be surprising - we’ve known since Quine and Kripke that you can have evidential considerations for/against and degrees of uncertainty about a priori questions. The correct method in such a situation is Bayesian—tally the plausibility points for and against admitting the new thing into your ontology. This can work even if we don’t have an entirely coherent understanding of normative facts, as long as it is coherent enough.
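To make the 'tally the plausibility points' idea a little more concrete, here is a minimal sketch of the Bayesian bookkeeping I have in mind (my own gloss, not something from the anti-realism post, and it assumes for simplicity that the individual considerations are roughly independent): each consideration for or against realism contributes a likelihood ratio, and your overall verdict is your prior odds multiplied by those ratios.

$$\frac{P(\text{realism}\mid C_1,\dots,C_n)}{P(\text{anti-realism}\mid C_1,\dots,C_n)} \;=\; \frac{P(\text{realism})}{P(\text{anti-realism})}\times\prod_{i=1}^{n}\frac{P(C_i\mid\text{realism})}{P(C_i\mid\text{anti-realism})}$$

On this picture, a consideration such as convergence between independently motivated ethical theories counts as a plausibility point for realism exactly to the extent that it is more expected under realism than under anti-realism, and queerness-style worries count the other way.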
Suppose you’re an Ancient Egyptian who knows a few practical methods for trigonometry and surveying but doesn’t know anything about formal systems or proofs, and someone asks you if there are ‘mathematical facts’. You would say something like “I’m not totally sure what this ‘maths’ thing consists of, but it seems at least plausible that there are some underlying reasons why we keep hitting on the same answers”. You’d be less confident than a modern mathematician, but you could still give a justification for the claim that there are right and wrong answers to mathematical claims. I think that the general thrust of convergence arguments puts us in a similar position with respect to ethical facts.
If we think about how words obtain their meaning, it should be apparent that in order to defend this type of normative realism, one has to commit to a specific normative-ethical theory. If the claim is that normative reality sticks out at us like Mount Fuji on a clear summer day, we need to be able to describe enough of its primary features to be sure that what we’re seeing really is a mountain. If all we are seeing is some rocks (“self-evident principles”) floating in the clouds, it would be premature to assume that they must somehow be connected and form a full mountain.
So, we don’t see the whole mountain, but nor are we seeing simply a few free-floating rocks that might be a mirage. Instead, what we see is maybe part of one slope and a peak.
Let’s be concrete now—the five-second, high-level description of both Hare’s and Parfit’s convergence arguments goes like this:
If we are going to will the maxim of our action to be a universal law, it must be, to use the jargon, universalizable. I have, that is, to will it not only for the present situation, in which I occupy the role that I do, but also for all situations resembling this in their universal properties, including those in which I occupy all the other possible roles. But I cannot will this unless I am willing to undergo what I should suffer in all those roles, and of course also get the good things that I should enjoy in others of the roles. The upshot is that I shall be able to will only such maxims as do the best, all in all, impartially, for all those affected by my action. And this, again, is utilitarianism.
and
An act is wrong just when such acts are disallowed by some principle that is optimific, uniquely universally willable, and not reasonably rejectable
In other words, the principles that (whatever our particular wants) would produce the best outcome in terms of satisfying our goals, that could be willed to be a universal law by all of us, and that would not be rejected as the basis for a contract, are all the same principles. That is, at the very least, a suspicious level of agreement between ethical theories. This is something substantive that can be said: every major attempt in history to get at a universal ethics (what produces the best outcome, what you can will to be a universal law, what we would all agree on) seems to produce really similar answers.
The particular convergence arguments given by Parfit and Hare are a lot more complex, and I can’t speak to their overall validity. If we thought they were valid, then we’d be seeing the entire mountain precisely. Since they just seem quite persuasive, we’re seeing the vague outline of something through the fog, but that’s not the same as just spotting a few free-floating rocks.
Now, run through these same convergence arguments but for decision theory and utility theory, and you have a far stronger conclusion. There might be a bit of haze at the top of that mountain, but we can clearly see which way the slope is headed.
This is why I think that ethical realism should be seen as plausible and realism about some normative facts, like epistemic facts, should be seen as more plausible still. There is some regularity here in need of explanation, and it seems somewhat more natural on the realist framework.
I agree that this ‘theory’ is woefully incomplete, and has very little to say about what the moral facts actually consist of beyond ‘the thing that makes there be a convergence’, but that’s often the case when we’re dealing with difficult conceptual terrain.
From Ben’s post:

I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious. What is this mysterious property of “should-ness” that certain actions are meant to possess—and why would our intuitions about which actions possess it be reliable? But I am also very sympathetic to realism and, in practice, tend to reason about normative questions as though I was a full-throated realist.
From the perspective of x, x is not self-defeating
From the antirealism post, referring to the normative web argument:

Sooner or later every theory ends up question-begging.
It’s correct that anti-realism means that none of our beliefs are justified in the realist sense of justification. The same goes for our belief in normative anti-realism itself. According to the realist sense of justification, anti-realism is indeed self-defeating.
However, the entire discussion is about whether the realist way of justification makes any sense in the first place—it would beg the question to postulate that it does.
From the perspective of Theism, God is an excellent explanation for the universe’s existence since he is a person with the freedom to choose to create a contingent entity at any time, while existing necessarily himself. From the perspective of almost anyone likely to read this post, that is obvious nonsense since ‘persons’ and ‘free will’ are not primitive pieces of our ontology, and a ‘necessarily existent person’ makes as much sense as ‘necessarily existent cabbage’, so you can’t call it a compelling argument for the atheist to become a theist.
By the same logic, it is true that saying ‘anti-realism is unjustified in the realist sense of justification’ is question-begging by the realist. The anti-realist has nothing much to say to it except ‘so what’. But you can convert that into a Quinean, non-question-begging plausibility argument by saying something like:
We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms, the other in which there are mind-independent facts about which of our beliefs are justified, and the latter is a more plausible, parsimonious account of the structure of our beliefs.
This won’t compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.
[...] one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don’t make ethical claims beyond ‘self-evident’ ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don’t have enough ‘material to work with’ for your theory to plausibly refer to anything.
Cool, I’m happy that this argument appeals to a moral realist!
I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.
In short, I don’t think of myself as a moral realist because I see strong reasons against convergence about moral axiology and population ethics.
This won’t compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.
I don’t think this argument (“anti-realism is self-defeating”) works well in this context. If anti-realism is just the claim “the rocks or free-floating mountain slopes that we’re seeing don’t connect to form a full mountain,” I don’t see what’s self-defeating about that.
One can try to say that a mistaken anti-realist makes a more costly mistake than a mistaken realist. However, on close inspection, I argue that this intuition turns out to be wrong. It also depends a lot on the details. Consider the following cases:
(1) A person with weak object-level normative opinions. To such a person, the moral landscape they’re seeing looks like either:
(1a) free-floating rocks or parts of mountain slope, with a lot of fog and clouds.
(1b) many (more or less) full mountains, all of which are similarly appealing. The view feels disorienting.
(2) A person with strong object-level normative opinions. To such a person, the moral landscape they’re seeing looks like either:
(2a) a full mountain with nothing else of note even remotely in the vicinity.
(2b) many (more or less) full mountains, but one of which is definitely theirs. All the other mountains have something wrong/unwanted about them.
2a is confident moral realism. 2b is confident moral anti-realism. 1a is genuine uncertainty, which is compatible with moral realism in theory, but there’s no particular reason to assume that the floating rocks would connect. 1b is having underdefined values.
Of course, how things appear to someone may not reflect how they really are. We can construct various types of mistakes that people in the above examples might be making.
This requires longer discussion, but I feel strongly that someone whose view is closest to 2b has a lot to lose by trying to change their psychology into something that lets them see things as 1a or 1b instead. They do have something to gain if 1a or 1b are actually epistemically warranted, but they also have stuff to lose. And the losses and gains here are commensurate – I tried to explain this in endnote 2 of my fourth post. (But it’s a hastily written endnote and I would have ideally written a separate post about just this issue. I plan to touch on it again in a future post on how anti-realism changes things for EAs.)
Lastly, it’s worth noting that sometimes people’s metaethics interact with their normative ethics. A person might not adopt a mindset of thinking about or actually taking stances on normative questions because they’re in the habit of deferring to others or waiting until morality is solved. But if morality is a bit like career choice, then there are things to lose from staying indefinitely uncertain about one’s ideal career, or just going along with others.
To summarize: There’s no infinitely strong wager for moral realism. There is an argument for valuing moral reflection (in the analogy: gaining more clarity on the picture that you’re seeing, and making sure you’re right about what you think you’re seeing). However, the argument for valuing moral reflection is not overridingly strong. It is to be traded off against the strength of one’s object-level normative opinions. And without object-level normative opinions, one’s values might be underdetermined.
You’ve given me a lot to think about! I broadly agree with a lot of what you’ve said here.
I think that it is a more damaging mistake to think moral antirealism is true when realism is true than vice versa, but I agree with you that the difference is nowhere near infinite, and doesn’t give you a strong wager.
However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
Epistemic anti-realism
Cool, I’m happy that this argument appeals to a moral realist! ….
...I don’t think this argument (“anti-realism is self-defeating”) works well in this context. If anti-realism is just the claim “the rocks or free-floating mountain slopes that we’re seeing don’t connect to form a full mountain,” I don’t see what’s self-defeating about that...
To summarize: There’s no infinitely strong wager for moral realism.
I agree that there is no infinitely strong wager for moral realism. As soon as moral realists start making empirical claims about the consequences of realism (that convergence is likely), you can’t say that moral realism is true necessarily or that there is an infinitely strong prior in favour of it. An AI that knows that your idealised preferences don’t cohere could always show up and prove you wrong, just as you say. If I were Bob in this dialogue, I’d happily concede that moral anti-realism is true.
If (supposing it were the case) there were not much consensus on anything to do with morality (“The rocks don’t connect...”), someone who pointed that out and said ‘from that I infer that moral realism is unlikely’ wouldn’t be saying anything self-defeating. Moral anti-realism is not self-defeating, either on its own terms or on the terms of a ‘mixed view’ like I describe here:
We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms, the other in which there are mind-independent facts about which of our beliefs are justified...
However, I do think that there is an infinitely strong wager in favour of normative realism and that normative anti-realism is self-defeating on the terms of a ‘mixed view’ that starts out considering the two alternatives like that given above. This wager is because of the subset of normative facts that are epistemic facts.
The example that I used was about ‘how beliefs are justified’. Maybe I wasn’t clear, but I was referring to beliefs in general, not to beliefs about morality. Epistemic facts, e.g. that you should believe something if there is a sufficient amount of evidence, are a kind of normative fact. You noted them on your list here.
So, the infinite wager argument goes like this -
1) On normative anti-realism there are no facts about which beliefs are justified. So there are no facts about whether normative anti-realism is justified. Therefore, normative anti-realism is self-defeating.
Except that doesn’t work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don’t question, which means that holding a belief without (the realist’s notion of) justification is consistent with anti-realism.
So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
Evidence for epistemic facts?
I find it interesting that the imagined scenario you give in #5 essentially skips over argument 2) as something that is impossible to judge:
AI: Only in a sense I don’t endorse as such! We’ve gone full circle. I take it that you believe that just like there might be irreducibly normative facts about how to do good, the same goes for irreducible normative facts about how to reason?
Bob: Indeed, that has always been my view.
AI: Of course, that concept is just as incomprehensible to me.
The AI doesn’t give evidence against there being irreducible normative facts about how to reason, it just states it finds the concept incoherent, unlike the (hypothetical) evidence that the AI piles on against moral realism (for example, that people’s moral preferences don’t cohere).
Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don’t care about the realist’s sense of ‘self-defeating’. The AI is in the latter camp, but not because of evidence, the way that it’s a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it’s constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren’t comprehensible to it, it only has access to argument 1), which doesn’t work. It can’t imagine 2).
However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren’t sure if it applies—and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn’t be justified.
However, this doesn’t establish moral realism—as you said earlier, moral anti-realism is not self-defeating.
If anti-realism is just the claim “the rocks or free-floating mountain slopes that we’re seeing don’t connect to form a full mountain,” I don’t see what’s self-defeating about that
Combining convergence arguments and the infinite wager
If you want to argue for moral realism, then you need evidence for moral realism, which comes in the form of convergence arguments. But the above argument is still relevant, because the convergence and ‘infinite wager’ arguments support each other.
The reason 2) would be bolstered by the success of convergence arguments (in epistemology, or ethics, or any other normative domain) is that convergence arguments increase our confidence that normativity is a coherent concept—which is what 2) needs to work. It certainly seems coherent to me, but this cannot be taken as self-evident since various people have claimed that they or others don’t have the concept.
I also think that 2) is some evidence in favour of moral realism, because it undermines some of the strongest antirealist arguments.
By contrast, for versions of normativity that depend on claims about a normative domain’s structure, the partners-in-crime arguments don’t even apply. After all, just because philosophers might—hypothetically, under idealized circumstances—agree on the answers to all (e.g.) decision-theoretic questions doesn’t mean that they would automatically also find agreement on moral questions.[29] On this interpretation of realism, all domains have to be evaluated separately
I don’t think this is right. What I’m giving here is such a ‘partners-in-crime’ argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to prove total convergence now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the ‘queerness argument’ that normative facts are incoherent or too strange to be allowed into our ontology. The ‘partners-in-crime’/‘infinite wager’ undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough—depending on the details.
I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.
So, with all that out of the way, when we start discussing the convergence arguments, the burden of proof on them is not colossal. If we already have reason to suspect that there are normative facts out there, perhaps some of them are moral facts. But if we found a random morass of different considerations under the name ‘morality’ then we’d be stuck concluding that there might be some normative facts, but maybe they are only epistemic facts, with nothing else in the domain of normativity.
I don’t think this is the case, but I will have to wait until your posts on that topic—I look forward to them!
All I’ll say is that I don’t consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement. (I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core—that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.) If Kant could have been a utilitarian and never realised it, then those who are appalled by the repugnant conclusion could certainly converge to accept it after enough ideal reflection!
Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.
This discussion continues to feel like the most productive discussion I’ve had with a moral realist! :)
However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
[...]
So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
[...]
Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don’t care about the realist’s sense of ‘self-defeating’. The AI is in the latter camp, but not because of evidence, the way that it’s a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it’s constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren’t comprehensible to it, it only has access to argument 1), which doesn’t work. It can’t imagine 2).
I think I agree with all of this, but I’m not sure, because we seem to draw different conclusions. In any case, I’m now convinced I should have written the AI’s dialogue a bit differently. You’re right that the AI shouldn’t just state that it has no concept of irreducible normative facts. It should provide an argument as well!
What would you reply if the AI uses the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text):
(1) Is irreducible normativity about super-reasons?
(2) Is (our knowledge of) irreducible normativity confined to self-evident principles?
(3) Is there a speaker-independent normative reality?
I think you’re inclined to agree with me that (1) and (2) are unworkable or not worthy of the term “normative realism.” Also, it seems like there’s a weak sense in which you agree with the points I made in (3), as it relates to the domain of morality.
But maybe you only agree with my points in (3) in a weak sense, whereas I consider the arguments in that section to have stronger implications. The way I thought about this, I think the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven’t yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence in human expert reasoners. Doesn’t this pin down the concept of irreducible normativity in a way that blocks any infinite wagers? It doesn’t feel like proper non-naturalism anymore once you postulate this link as a conceptual necessity. “Normativity” became a much more mundane concept after we accepted this link.
However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren’t sure if it applies—and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn’t be justified.
The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don’t see alternatives to my suggestions (1), (2) and (3).
What I’m giving here is such a ‘partners-in-crime’ argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to prove total convergence now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the ‘queerness argument’ that normative facts are incoherent or too strange to be allowed into our ontology. The ‘partners-in-crime’/‘infinite wager’ undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough—depending on the details.
Since I don’t think we have established anything interesting about normative facts, the only claim I see in the vicinity of what you say in this paragraph would go as follows:
“Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn’t be too surprised if morality works similarly.”
And I kind of agree with that, but I don’t know how much convergence I would expect in epistemology. (I think it’s plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)
All I’ll say is that I don’t consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement.
I agree with this. My confidence that convergence won’t work is based on not only observing disagreements in fundamental intuitions, but also on seeing why people disagree, and seeing that these disagreements are sometimes “legitimate” because ethical discussions always get stuck in the same places (differences in life goals, which is intertwined with axiology). If one actually thinks about what sorts of assumptions are required for the discussions not to get stuck (something like: “all humans would adopt the same broad types of life goals under idealized conditions”), many people would probably recognize that those assumptions are extremely strong and counterintuitive. Oddly enough, people often don’t seem to think that far because they self-identify as moral realists for reasons that don’t make any sense. They expect convergence on moral questions because they somehow ended up self-identifying as moral realists, instead of them self-identifying as moral realists because they expect convergence.
(I’ll maybe make another comment later today to briefly expand on my line of argument here.)
(I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core—that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)
I also agree with that, except that I think axiology is the one place where I’m most confident that there’s no convergence. :)
Maybe my anti-realism is best described as “some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined.”
(I thought “anti-realism” was the best description for my view, because as I discussed in this comment, the way in which I treat normative concepts takes away the specialness they have under non-naturalism. Even some non-naturalists claim that naturalism isn’t interesting enough to be called “moral realism.” And insofar as my position can be characterized as naturalism, it’s still underdetermined in places where it matters a lot for our ethical practice.)
Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.
When I read a similar passage at the end of Parfit’s Reasons and Persons (which may even have included a quote of this passage?), I shared Parfit’s view. But I’ve done a lot of thinking since then. At some point one also has to drastically increase one’s confidence that further game-changing considerations won’t show up, especially if one’s map of the option space feels very complete in a self-contained way, and intellectually satisfying.
This discussion continues to feel like the most productive discussion I’ve had with a moral realist! :)
Glad to be of help! I feel like I’m learning a lot.
What would you reply if the AI uses the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text)
...
(3) Is there a speaker-independent normative reality?
Focussing on epistemic facts, the AI could not make that argument. I assumed that you had the AI lack the concept of epistemic reasons because you agreed with me that, if you start out with the concept, there is no possible argument that could talk you out of using it, not because you just felt that it would have been too much of a detour to have the AI explain why it finds the concept incoherent.
I think I agree with all of this, but I’m not sure, because we seem to draw different conclusions. In any case, I’m now convinced I should have written the AI’s dialogue a bit differently. You’re right that the AI shouldn’t just state that it has no concept of irreducible normative facts. It should provide an argument as well!
How would this analogous argument go? I’ll take the AI’s key point and reword it to speak about epistemic facts instead of moral facts:
AI: To motivate the use of irreducibly normative concepts, philosophers often point to instances of universal agreement on epistemic propositions. Sammy Martin uses the example “we always have a reason to believe that 2+2=4.” Your intuition suggests that all epistemic propositions work the same way. Therefore, you might conclude that even for propositions philosophers disagree over, there exists a solution that’s “just as right” as “we always have a reason to believe that 2+2=4” is right. However, you haven’t established that all epistemic statements work the same way—that was just an intuition. “We always have a reason to believe that 2+2=4” describes something that people are automatically disposed to believe. It expresses something that normally-disposed people come to endorse by their own lights. That makes it a true fact of some kind, but it’s not necessarily an “objective” or “speaker-independent” fact. If you want to show beyond doubt that there are epistemic facts that don’t depend on the attitudes held by the speakers—i.e., epistemic facts beyond what people themselves will judge to be what you should believe—you’d need to deliver a stronger example. But then you run into the following dilemma: If you pick a self-evident epistemic proposition, you face the critique that the “epistemic facts” that you claim exist are merely examples of a subjectivist epistemology. By contrast, if you pick an example proposition that philosophers can reasonably disagree over, you face the critique that you haven’t established what it could mean for one party to be right. If one person claims we have reason to believe that alien life exists, and another person denies this, how would we tell who’s right? What is the question that these two parties disagree on? Thus far, I have no coherent account of what it could mean for an epistemic theory to be right in the elusive, objectivist sense that Martin and other normative realists hold in mind.
Bob: I think I followed that. You mentioned the example of uncontroversial epistemic propositions, and you seemed somewhat dismissive about their relevance? I always thought those were pretty interesting. Couldn’t I hold the view that true epistemic statements are always self-evident? Maybe not because self-evidence is what makes them true, but because, as rational beings, we are predisposed to appreciate epistemic facts?
AI: Such an account would render epistemology very narrow. Incredibly few epistemic propositions appear self-evident to all humans. The same goes for whatever subset of “well-informed” or “philosophically sophisticated” humans you may want to construct.
It doesn’t work, does it? The reason it doesn’t work is that the scenario the AI has been written into, in which it has ‘concluded’ that ‘incredibly few epistemic propositions appear self-evident to all humans’, is unimaginable. What would it mean for this to be true? What would the world have to be like?
I think the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven’t yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence in human expert reasoners.
I do not believe it is logically impossible that expert reasoners could diverge on all epistemic facts, but I do think that it is in some fairly deep sense impossible. For there to be such a divergence, reality itself would have to be unknowable.
The ‘speaker-independent normative reality’ that epistemic facts refer to is just actual objective reality—of all the potential epistemic facts out there, the one that actually corresponds to reality is the one that ‘sticks out’ in exactly the way that a speaker-independent normative reality should.
This means that there is no possible world where anyone with the concept of epistemic facts gets convinced, probabilistically, because they fail to see any epistemic convergence, that there are no epistemic facts. There would never be such a lack of convergence.
So my initial point,
The AI is in the latter camp, but not because of evidence, the way that it’s a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it’s constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren’t comprehensible to it, it only has access to argument 1), which doesn’t work. It can’t imagine 2).
still stands—that the AI is a normative anti-realist because it doesn’t have the concept of a normative reason, not because it has the concept and has decided that it probably doesn’t apply (and there was no alternative way for you to write the AI reaching that conclusion).
The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don’t see alternatives to my suggestions (1), (2) and (3).
So I take option (3), where the ‘extremely strong convergence’ on claims about epistemic facts about what we should believe implies with virtual certainty that there is a speaker-independent normative reality, because the reality-corresponding collection of epistemic claims, in fact, sticks out compared to all the other possible epistemic facts.
So, maybe the ‘normativity argument’, as I called it, is really just another convergence argument, but one of infinite or near-infinite strength, because the convergence among our beliefs about what is epistemically justified is so strong that it’s effectively unimaginable that they could fail to converge.
If you wish to deny that epistemic facts are needed to explain the convergence, I think that you end up in quite a strong form of pragmatism about truth, and give up on the notion of knowing anything about mind-independent objective reality, Kant-style, for reasons that I discuss here. That’s quite a bullet to bite. You don’t expect much convergence on epistemic facts, so maybe you are already a pragmatist about truth?
“Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn’t be too surprised if morality works similarly.”
And I kind of agree with that, but I don’t know how much convergence I would expect in epistemology. (I think it’s plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)
Lastly,
My confidence that convergence won’t work is based on not only observing disagreements in fundamental intuitions, but also on seeing why people disagree, and seeing that these disagreements are sometimes “legitimate” because ethical discussions always get stuck in the same places (differences in life goals, which is intertwined with axiology).
I’ll have to wait for your more specific arguments on this topic! I did give some preliminary discussion here of why, for example, I think that you’re dragged towards a total-utilitarian view whether you like it or not. It’s also important to note that the convergence arguments aren’t (principally) about people, but about possible normative theories—people might refuse to accept the implications of their own beliefs.
This is an interesting post, and I have a couple of things to say in response. I’m copying over the part of my shortform that deals with this:
Normative Realism by degrees
Further to the whole question of Normative / moral realism, there is this post on Moral Anti-Realism. While I don’t really agree with it, I do recommend reading it—one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don’t make ethical claims beyond ‘self-evident’ ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don’t have enough ‘material to work with’ for your theory to plausibly refer to anything. The Moral Anti-Realism post presents this dilemma for the moral realist:
To a large degree, I agree with this claim—I think that many moral realists do as well. Convergence type arguments often appear in more recent metaethics (Hare and Parfit are in those previous lists) - so this may already have been recognised. The post discusses such a response to antirealism at the end:
But there’s one point I want to make which is in disagreement with that post. I agree that how much you can concretely say about your supposed mind-independent domain of facts affects how plausible its existence should seem, and even how coherent the concept is, but I think that this can come by degrees. This should not be surprising - we’ve known since Quine and Kripke that you can have evidential considerations for/against and degrees of uncertainty about a priori questions. The correct method in such a situation is Bayesian—tally the plausibility points for and against admitting the new thing into your ontology. This can work even if we don’t have an entirely coherent understanding of normative facts, as long as it is coherent enough.
Suppose you’re an Ancient Egyptian who knows a few practical methods for trigonometry and surveying, doesn’t know anything about formal systems or proofs, and someone asks you if there are ‘mathematical facts’. You would say something like “I’m not totally sure what this ‘maths’ thing consists of, but it seems at least plausible that there are some underlying reasons why we keep hitting on the same answers”. You’d be less confident than a modern mathematician, but you could still give a justification for the claim that there are right and wrong answers to mathematical claims. I think that the general thrust of convergence arguments puts us in a similar position with respect to ethical facts.
So, we don’t see the whole mountain, but nor are we seeing simply a few free-floating rocks that might be a mirage. Instead, what we see is maybe part of one slope and a peak.
Let’s be concrete, now—the 5 second, high level description of both Hare’s and Parfit’s convergence arguments goes like this:
and
In other words, the principles that (whatever our particular wants) would produce the best outcome in terms of satisfying our goals, could be willed to be a universal law by all of us and would not be rejected as the basis for a contract, are all the same principles. That is at least suspicious levels of agreement between ethical theories. This is something substantive that can be said—out of every major attempt to get at a universal ethics that has in fact been attempted in history: what produces the best outcome, what can you will to be a universal law, what would we all agree on, seem to produce really similar answers.
The particular convergence arguments given by Parfit and Hare are a lot more complex, I can’t speak to their overall validity. If we thought they were valid then we’d be seeing the entire mountain precisely. Since they just seem quite persuasive, we’re seeing the vague outline of something through the fog, but that’s not the same as just spotting a few free-floating rocks.
Now, run through these same convergence arguments but for decision theory and utility theory, and you have a far stronger conclusion. there might be a bit of haze at the top of that mountain, but we can clearly see which way the slope is headed.
This is why I think that ethical realism should be seen as plausible and realism about some normative facts, like epistemic facts, should be seen as more plausible still. There is some regularity here in need of explanation, and it seems somewhat more natural on the realist framework.
I agree that this ‘theory’ is woefully incomplete, and has very little to say about what the moral facts actually consist of beyond ‘the thing that makes there be a convergence’, but that’s often the case when we’re dealing with difficult conceptual terrain.
From Ben’s post:
From the perspective of x, x is not self-defeating
From the antirealism post, referring to the normative web argument:
Sooner or later every theory ends up question-begging.
From the perspective of Theism, God is an excellent explanation for the universe’s existence since he is a person with the freedom to choose to create a contingent entity at any time, while existing necessarily himself. From the perspective of almost anyone likely to read this post, that is obvious nonsense since ‘persons’ and ‘free will’ are not primitive pieces of our ontology, and a ‘necessarily existent person’ makes as much sense as ‘necessarily existent cabbage’- so you can’t call it a compelling argument for the atheist to become a theist.
By the same logic, it is true that saying ‘anti-realism is unjustified on the realist sense of justification’ is question-begging by the realist. The anti-realist has nothing much to say to it except ‘so what’. But you can convert that into a Quinean, non-question begging plausibility argument by saying something like:
This won’t compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.
Cool, I’m happy that this argument appeals to a moral realist!
I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.
In short, I don’t think of myself as a moral realist because I see strong reasons against convergence about moral axiology and population ethics.
I don’t think this argument (“anti-realism is self-defeating”) works well in this context. If anti-realism is just the claim “the rocks or free-floating mountain slopes that we’re seeing don’t connect to form a full mountain,” I don’t see what’s self-defeating about that.
One can try to say that a mistaken anti-realist makes a more costly mistake than a mistaken realist. However, on close inspection, I argue that this intuition turns out to be wrong. It also depends a lot on the details. Consider the following cases:
(1) A person with weak object-level normative opinions. To such a person, the moral landscape they’re seeing looks like either:
(1a) free-floating rocks or parts of mountain slope, with a lot of fog and clouds.
(1b) many (more or less) full mountains, all of which are similarly appealing. The view feels disorienting.
(2) A person with strong object-level normative opinions. To such a person, the moral landscape they’re seeing looks like either:
(2a) a full mountain with nothing else of note even remotely in the vicinity.
(2b) many (more or less) full mountains, but one of which is definitely theirs. All the other mountains have something wrong/unwanted about them.
2a is confident moral realism. 2b is confident moral anti-realism. 1a is genuine uncertainty, which is compatible with moral realism in theory, but there’s no particular reason to assume that the floating rocks would connect. 1b is having underdefined values.
Of course, how things appear to someone may not reflect how they really are. We can construct various types of mistakes that people in the above examples might be making.
This requires longer discussion, but I feel strongly that someone whose view is closest to 2b has a lot to lose by trying to change their psychology into something that lets them see things as 1a or 1b instead. They do have something to gain if 1a or 1b are actually epistemically warranted, but they also have stuff to lose. And the losses and gains here are commensurate – I tried to explain this in endnote 2 of my fourth post. (But it’s a hastily written endnote and I would have ideally written a separate post about just this issue. I plan to touch on it again in a future post on how anti-realism changes things for EAs.)
Lastly, it’s worth noting that sometimes people’s metaethics interact with their normative ethics. A person might not adopt a mindset of thinking about or actually taking stances on normative questions because they’re in the habit of deferring to others or waiting until morality is solved. But if morality is a bit like career choice, then there are things to lose from staying indefinitely uncertain about one’s ideal career, or just going along with others.
To summarize: There’s no infinitely strong wager for moral realism. There is an argument for valuing moral reflection (in the analogy: gaining more clarity on the picture that you’re seeing, and making sure you’re right about what you think you’re seeing). However, the argument for valuing moral reflection is not overridingly strong. It is to be traded off against one’s the strength of one’s object-level normative opinions. And without object-level normative opinions, one’s values might be underdetermined.
You’ve given me a lot to think about! I broadly agree with a lot of what you’ve said here.
I think that it is a more damaging mistake to think moral antirealism is true when realism is true than vice versa, but I agree with you that the difference is nowhere near infinite, and doesn’t give you a strong wager.
However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
Epistemic anti-realism
I agree that there is no infinitely strong wager for moral realism. As soon as moral realists start making empirical claims about the consequences of realism (that convergence is likely), you can’t say that moral realism is true necessarily or that there is an infinitely strong prior in favour of it. An AI that knows that your idealised preferences don’t cohere could always show up and prove you wrong, just as you say. If I were Bob in this dialogue, I’d happily concede that moral anti-realism is true.
If (supposing it were the case) there were not much consensus on anything to do with morality (“The rocks don’t connect...”), someone who pointed that out and said ‘from that I infer that moral realism is unlikely’ wouldn’t be saying anything self-defeating. Moral anti-realism is not self-defeating, either on its own terms or on the terms of a ‘mixed view’ like I describe here:
However, I do think that there is an infinitely strong wager in favour of normative realism and that normative anti-realism is self-defeating on the terms of a ‘mixed view’ that starts out considering the two alternatives like that given above. This wager is because of the subset of normative facts that are epistemic facts.
The example that I used was about ‘how beliefs are justified’. Maybe I wasn’t clear, but I was referring to beliefs in general, not to beliefs about morality. Epistemic facts, e.g. that you should believe something if there is sufficient amount of evidence, are a kind of normative fact. You noted them on your list here.
So, the infinite wager argument goes like this -
Except that doesn’t work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don’t question, which means that holding a belief without (the realist’s notion of) justification is consistent with anti-realism.
So the wager argument for normative realism actually goes like this -
Evidence for epistemic facts?
I find it interesting the imagined scenario you give in #5 essentially skips over argument 2) as something that is impossible to judge:
The AI doesn’t give evidence against there being irreducible normative facts about how to reason, it just states it finds the concept incoherent, unlike the (hypothetical) evidence that the AI piles on against moral realism (for example, that people’s moral preferences don’t cohere).
Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don’t care about the realist’s sense of ‘self-defeating’. The AI is in the latter camp, but not because of evidence, the way that it’s a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it’s constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren’t comprehensible to it, it only has access to argument 1), which doesn’t work. It can’t imagine 2).
However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren’t sure if it applies—and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn’t be justified.
However, this doesn’t establish moral realism—as you said earlier, moral anti-realism is not self-defeating.
Combining convergence arguments and the infinite wager
If you want to argue for moral realism, then you need evidence for moral realism, which comes in the form of convergence arguments. But the above argument is still relevant, because the convergence and ‘infinite wager’ arguments support each other.
The reason 2) would be bolstered by the success of convergence arguments (in epistemology, or ethics, or any other normative domain) is that convergence arguments increase our confidence that normativity is a coherent concept—which is what 2) needs to work. It certainly seems coherent to me, but this cannot be taken as self-evident since various people have claimed that they or others don’t have the concept.
I also think that 2) is some evidence in favour of moral realism, because it undermines some of the strongest antirealist arguments.
I don’t think this is right. What I’m giving here is exactly such a ‘partners-in-crime’ argument, with a structure: epistemic facts sit at the base. Realism about normativity certainly should lower the burden of proof on moral realism to demonstrate total convergence right now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the ‘queerness argument’: that normative facts are incoherent, or too strange to be allowed into our ontology. The ‘partners-in-crime’/‘infinite wager’ argument undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough—depending on the details.
So, with all that out of the way, when we start discussing the convergence arguments, the burden of proof on them is not colossal. If we already have reason to suspect that there are normative facts out there, perhaps some of them are moral facts. But if we found a random morass of different considerations under the name ‘morality’, then we’d be stuck concluding that there might be some normative facts, but that maybe they are only epistemic facts, with nothing else in the domain of normativity.
I don’t think this is the case, but I will have to wait until your posts on that topic—I look forward to them!
All I’ll say is that I don’t consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can point to many positive examples of convergence, the preponderance of evidence is that there are elements of our morality on which high-level agreement is reached. (I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much of ethics could be built on top of such a realist core—that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.) If Kant could have been a utilitarian and never realised it, then those who are appalled by the repugnant conclusion could certainly converge to accept it after enough ideal reflection!
This discussion continues to feel like the most productive discussion I’ve had with a moral realist! :)
[...]
[...]
I think I agree with all of this, but I’m not sure, because we seem to draw different conclusions. In any case, I’m now convinced I should have written the AI’s dialogue a bit differently. You’re right that the AI shouldn’t just state that it has no concept of irreducible normative facts. It should provide an argument as well!
What would you reply if the AI used the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text):
(1) Is irreducible normativity about super-reasons?
(2) Is (our knowledge of) irreducible normativity confined to self-evident principles?
(3) Is there a speaker-independent normative reality?
I think you’re inclined to agree with me that (1) and (2) are unworkable or not worthy of the term “normative realism.” Also, it seems like there’s a weak sense in which you agree with the points I made in (3), as it relates to the domain of morality.
But maybe you only agree with my points in (3) in a weak sense, whereas I consider the arguments in that section to have stronger implications. The way I think about it, the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven’t yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence among human expert reasoners. Doesn’t this pin down the concept of irreducible normativity in a way that blocks any infinite wagers? It doesn’t feel like proper non-naturalism anymore once you postulate this link as a conceptual necessity. “Normativity” became a much more mundane concept after we accepted this link.
The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don’t see alternatives to my suggestions (1), (2) and (3).
Since I don’t think we have established anything interesting about normative facts, the only claim I see in the vicinity of what you say in this paragraph would go as follows:
“Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn’t be too surprised if morality works similarly.”
And I kind of agree with that, but I don’t know how much convergence I would expect in epistemology. (I think it’s plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)
I agree with this. My confidence that convergence won’t work is based not only on observing disagreements in fundamental intuitions, but also on seeing why people disagree, and seeing that these disagreements are sometimes “legitimate” because ethical discussions always get stuck in the same places (differences in life goals, which are intertwined with axiology). If one actually thinks about what sorts of assumptions are required for the discussions not to get stuck (something like: “all humans would adopt the same broad types of life goals under idealized conditions”), many people would probably recognize that those assumptions are extremely strong and counterintuitive. Oddly enough, people often don’t seem to think that far, because they self-identify as moral realists for reasons that don’t make any sense. They expect convergence on moral questions because they somehow ended up self-identifying as moral realists, instead of self-identifying as moral realists because they expect convergence.
(I’ll maybe make another comment later today to briefly expand on my line of argument here.)
I also agree with that, except that I think axiology is the one place where I’m most confident that there’s no convergence. :)
Maybe my anti-realism is best described as “some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined.”
(I thought “anti-realism” was the best description for my view, because as I discussed in this comment, the way in which I treat normative concepts takes away the specialness they have under non-naturalism. Even some non-naturalists claim that naturalism isn’t interesting enough to be called “moral realism.” And insofar as my position can be characterized as naturalism, it’s still underdetermined in places where it matters a lot for our ethical practice.)
When I read a similar passage at the end of Parfit’s Reasons and Persons (which may even have included a quote of this passage?), I shared Parfit’s view. But I’ve done a lot of thinking since then. At some point one also has to drastically increase one’s confidence that further game-changing considerations won’t show up, especially if one’s map of the option space feels complete, self-contained, and intellectually satisfying.
Glad to be of help! I feel like I’m learning a lot.
Focussing on epistemic facts, the AI could not make that argument. I assumed that you had the AI lack the concept of epistemic reasons because you agreed with me that there is no possible argument that could reason you out of using this concept once you start out with it, not because you just felt that it would have been too much of a detour to have the AI explain why it finds the concept incoherent.
How would this analogous argument go? I’ll take the AI’s key point and reword it to speak about epistemic facts instead of moral facts:
It doesn’t work, does it? The reason it doesn’t work is that the scenario the AI describes, in which it has ‘concluded’ that ‘incredibly few epistemic propositions appear self-evident to all humans’, is unimaginable. What would it mean for this to be true? What would the world have to be like?
I do not believe it is logically impossible that expert reasoners could diverge on all epistemic facts, but I do think that it is in some fairly deep sense impossible. For there to be such a divergence, reality itself would have to be unknowable.
The ‘speaker-independent normative reality’ that epistemic facts refer to is just actual objective reality—of all the potential epistemic facts out there, the set that actually corresponds to reality is the one that ‘sticks out’ in exactly the way that a speaker-independent normative reality should.
This means that there is no possible world in which someone who has the concept of epistemic facts becomes convinced, probabilistically, that there are no epistemic facts because they fail to see any epistemic convergence. There would never be such a lack of convergence.
So my initial point,
still stands—that the AI is a normative anti-realist because it doesn’t have the concept of a normative reason, not because it has the concept and has decided that it probably doesn’t apply (and there was no alternative way for you to write the AI reaching that conclusion).
So I take option (3): the ‘extremely strong convergence’ on epistemic claims about what we should believe implies with virtual certainty that there is a speaker-independent normative reality, because the reality-corresponding collection of epistemic claims does, in fact, stick out compared to all the other possible epistemic facts.
So maybe the ‘normativity argument’, as I called it, is really just another convergence argument, but one of infinite or near-infinite strength, because the convergence among our beliefs about what is epistemically justified is so strong that a failure of convergence is effectively unimaginable.
If you wish to deny that epistemic facts are needed to explain the convergence, I think that you end up in quite a strong form of pragmatism about truth, and give up on the notion of knowing anything about mind-independent objective reality, Kant-style, for reasons that I discuss here. That’s quite a bullet to bite. You don’t expect much convergence on epistemic facts, so maybe you are already a pragmatist about truth?
Lastly,
I’ll have to wait for your more specific arguments on this topic! I did give some preliminary discussion here of why, for example, I think that you’re dragged towards a total-utilitarian view whether you like it or not. It’s also important to note that the convergence arguments aren’t (principally) about people, but about possible normative theories—people might refuse to accept the implications of their own beliefs.