[Vote explanation]: The most important reason for my favoring moral realism is my sense that some goals (e.g. promoting happiness, averting misery) are intrinsically more rationally warranted than others (like promoting misery and averting happiness).
In the same way that some things are true and worth believing, some things are good and worth desiring. We should ultimately find the notion of justified goals to be no more deeply mysterious than that of justified beliefs. To deny the objective reality of either goodness or truth would seem to undermine inquiry, and there's no deeply compelling reason to do so. (For one thing: in order for there to be a suitably objective normative reason, normative realism would have to be true!)
You were negative toward the idea of hypothetical imperatives elsewhere but I don't see how you get around the need for them.
You say epistemic and moral obligations work "in the same way," but they don't. Yes, we have epistemic obligations to believe true things… in order to have accurate beliefs about reality. That's a specific goal. But you can't just assert "some things are good and worth desiring" without specifying… good according to what standard? The existence of epistemic standards doesn't prove there's One True Moral Standard any more than the existence of chess rules proves there's One True Game.
For morality, there are facts about which actions would best satisfy different value systems. I consider those to be a form of objective moral facts. And if you have those value systems, I think it is thus rationally warranted to desire those outcomes and pursue those actions. But I don't know how you would get facts about which value system to have without appealing to a higher-order value system.
Far from undermining inquiry, this view improves it by forcing explicitness about our goals. When you feel "promoting happiness is obviously better than promoting misery," that strikes me not as a metaphysical truth but as expressive assertivism. Real inquiry means examining why we value what we value and how to get it.
I'm far from a professional philosopher and I know you have deeply studied this much more than I have, so I don't mean to accuse you of being naive. Looking forward to learning more.
It's an interesting dialectic! I don't have heaps of time to go into depth on this, but you may get a better sense of my view from reading my response to Maguire & Woods, "Why Belief is No Game":
My biggest complaint about this sort of view is that it completely divorces reasons from rationality. They conceive of reasons as things that support (either by the authoritative standard of value, or some practice-relative standard of correctness) rather than as things that rationalize. As a result, they miss an important disanalogy between practice-relative "reasons" and epistemic reasons: violating the latter, but not the former, renders one (to some degree) irrational, or liable to rational criticism.
Of course, there are more important things than being rational: I'm all in favour of "rational irrationality" – taking magic pills that will make you crazy if that's essential to save the world from an evil demon or the like. But I still think it's important to recognize rationality as the objective/"authoritative" standard of correctness for our cognitive/agential functioning. It's really importantly different from mere practice-relative reasons, which I don't think are properly conceived of as normative at all. There's really nothing genuinely erroneous (irrational) about playing chess badly in order to save the world, in striking contrast to the person who (rightly and rationally) turns themselves irrational in order to save the world.
So, whereas M&W are happy to speak of "chess reasons" as genuinely normative (just not authoritative) reasons, I would reject this on the grounds that chess reasons do not rationalize action. If the evil demon will punish us all if you play chess well, then you really have no good reason at all to play well. (By contrast, if you're punished for believing in line with the evidence, that doesn't change what it is rational to believe, it just provides an overwhelmingly important practical reason to [act so as to] block or override your epistemic rationality somehow!)
… Surprisingly, M&W take "non-compliance" with "operative standards of correctness" to "render one liable to certain kinds of criticism", even if one has violated these non-authoritative standards precisely in order to comply with authoritative normative reasons, or what one all-things-considered ought to do. This claim strikes me as substantively nuts. If you rightly violate your professional code in order to save the world from destruction, it simply isn't true that you're thereby "liable to professional criticism." (Especially if your profession is, say, a concentration camp guard.) Anyone who criticized you would reveal themselves to be the world's biggest rule-fetishist. Put another way: conforming to the all-things-considered ought is an indisputable justification, and you cannot reasonably be blamed or criticized when you act in a way that is perfectly well justified.
I think all reasons are hypothetical, but some hypotheticals (like "if you want to avoid unnecessary suffering...") are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view – we think the guard shouldn't follow professional norms precisely because we're applying a different value system (human welfare over rule-following). There's no view from nowhere; there's just the fact that (luckily) most of us share similar core values.
Do you think there's an epistemic fact of the matter as to what beliefs about the future are most reasonable and likely to be true given the past? (E.g., whether we should expect future emeralds to be green or grue – where something is "grue" if it is first observed before some future date and green, or not so observed and blue.) Is probability end-relational too? Objective norms for inductive reasoning don't seem any less metaphysically mysterious than objective norms for practical reasoning.
One could just debunk all philosophical beliefs as mere "deeply embedded… intuitions" so as to avoid "mysterious metaphysical facts". But that then leaves you committed to thinking that all open philosophical questions – many of which seem to be sensible things to wonder about – are actually total nonsense. (Some do go this way, but it's a pretty extreme view!) We project green, the grue-speaker projects grue, and that's all there is to say. I just don't find such radical skepticism remotely credible. You might as well posit that the world was created 5 minutes ago, or that solipsism is true, in order to further trim down your ontology. I'd rather say: parsimony is not the only theoretical virtue; actually accounting for the full range of real questions we can ask matters too!
(I'm more sympathetic to the view that we can't know the answers to these questions than to the view that there is no real question here to ask.)
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal – accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are "intrinsically more rationally warranted," I'd ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us – but that's because we're humans with particular values, not because we've discovered some goal-independent truth.
I'm not embracing radical skepticism or saying moral questions are nonsense. I'm making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. "Is X wrong according to utilitarianism?" has a determinate, objective, mind-independent answer. "Is X wrong simpliciter?" does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that's because "justified" in epistemology means "likely to be true" – there's an implicit standard. In ethics, we need to make our standards explicit.
Why couldn't someone disagree with you about the purpose of belief-formation: "sure, truth-seeking feels obviously correct to you, but that's just because [some story]… not because we've discovered some goal-independent truth."
Further, part of my point with induction is that merely aiming at truth doesn't settle the hard questions of epistemology (any more than aiming at the good settles the hard questions of axiology).
To see this: suppose that, oddly enough, the grue-speakers turn out to be right that all new emeralds discovered after 2030 are observed to be (what we call) blue. Surprising! Still, I take it that as of 2025, it was reasonable for us to expect future emeralds to be green, and unreasonable of the grue-speakers to expect them to be grue. Part of the challenge I meant to raise for you was: What grounds this epistemic fact? (Isn't it metaphysically mysterious to say that green as a property is privileged over "grue" for purposes of inductive reasoning? What could make that true, on your view? Don't you need to specify your "inductive standards"?)
moral questions make perfect sense once we specify the evaluative standard
Once you fully specify the evaluative standard, there is no open question left to ask, just concealed tautologies. You've replaced all the important moral questions with trivial logical ones. ("Does P&Q&R imply P?") Normative questions it no longer makes sense to ask on your view include:
I already know what Nazism implies, and what liberalism implies, but which view is better justified?
I already know what the different theories of well-being imply. But which view is actually correct? Would plugging into the experience machine be good or bad for me?
I already know what moral theory I endorse, but would it be wise to "hedge" and take moral uncertainty into account, in case I'm wrong?
And in the epistemic case (once we extend your view to cover inductive standards):
I already know what the green vs grue inductive standards have to say about whether I should expect future emeralds to be green or grue; but – in order to have the best shot at a true belief, given my available evidence – which should I expect?
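(To make the underdetermination concrete, here is a minimal sketch – an illustration only, using the usual encoding of "grue" as "green if first observed before 2030, else blue" – of two inductive standards that fit every past observation equally well yet diverge about the future:)

```python
# Toy illustration of the "grue" underdetermination: two inductive rules
# that agree on all pre-2030 evidence but diverge afterwards.

def predict_green(year: int) -> str:
    """The 'green' standard: expect every emerald to look green."""
    return "green"

def predict_grue(year: int) -> str:
    """The 'grue' standard: expect emeralds to be grue, i.e. to look
    green if first observed before 2030 and blue thereafter."""
    return "green" if year < 2030 else "blue"

# Both rules match every past observation...
assert all(predict_green(y) == predict_grue(y) == "green"
           for y in range(1900, 2030))

# ...yet disagree about the future, so past evidence alone cannot choose
# between them; some further inductive standard has to do that work.
print(predict_green(2031), predict_grue(2031))  # -> green blue
```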
You're right that I need to bite the bullet on epistemic norms too and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in a meta-ethical sense – it means "in order to have beliefs that accurately track reality." The difference is that this goal is so universally shared across different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I've "replaced all the important moral questions with trivial logical ones," but that's unfair. The questions remain very substantive – they just need proper framing:
Instead of "Which view is better justified?" we ask "Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?"
Instead of "Would the experience machine be good for me?" we ask "Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?"
These aren't trivial questions! They're complex empirical and philosophical questions. What I'm denying is that there's some further question – "But which view is really justified?" – floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I'd say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That's still goal-relative, just at a meta-level.
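(For concreteness: "maximizing expected moral value" is standardly cashed out as weighting each theory's verdict by one's credence in that theory. A minimal sketch follows, with made-up theories, credences, and scores; note that placing different theories' verdicts on one common scale is itself a contested assumption:)

```python
# A toy sketch of hedging under moral uncertainty by maximizing expected
# moral value: weight each theory's verdict by your credence in it.
# Theories, credences, and scores below are all hypothetical.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choice-worthiness of each action according to each theory, assumed
# (contentiously) to be comparable on a single common scale.
scores = {
    "break promise to save five": {"utilitarianism": 10, "deontology": -5},
    "keep promise":               {"utilitarianism": -10, "deontology": 5},
}

def expected_moral_value(action: str) -> float:
    """Credence-weighted sum of each theory's score for the action."""
    return sum(credences[t] * scores[action][t] for t in credences)

best = max(scores, key=expected_moral_value)
print(best, expected_moral_value(best))  # -> break promise to save five 4.0

# As the comment above notes, the rule is still goal-relative: it only
# gets going once the higher-order standard (maximize expected value over
# the theories you find plausible) is itself adopted.
```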
The key insight remains: every "should" or "justified" implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We're not eliminating important questions – we're revealing what we're actually asking.
I agree it's often helpful to make our implicit standards explicit. But I disagree that that's "what we're actually asking". At least in my own normative thought, I don't just wonder about what meets my standards. And I don't just disagree with others about what does or doesn't meet their standards or mine. I think the most important disagreement of all is over which standards are really warranted.
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts. I think it's key to philosophy that there is more we can wonder about than just that. (There may not be any tractable disagreement once we get down to bedrock clashing standards, but I think there is still a further question over which we really disagree, even if we have no way to persuade the other of our position.)
It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
At least in my own normative thought, I don't just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I'm an illusionist about this in particular as I don't even know what we could be reasonably disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness, but what is goodness? Is it platonic? Is it grounded in God? I'm not even sure what there is to dispute. I'd argue the best we can do is argue to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency etc.) and then see what best satisfies those.
~
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts.
Right – and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we're left with clashing bedrock intuitions. You want to insist there's still a fact about who's ultimately correct, but can't explain what would make it true.
~
It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
I think we're successfully engaging in a dispute here and that does kind of prove my position. I'm trying to argue that you're appealing to something that just doesn't exist and that this is inconsistent with your epistemic values. Whether one can ground a judgement about what is "really warranted" is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there's a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards – logical consistency, explanatory power, and various other virtues. When we debate meta-ethics or meta-epistemology, we're not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The "companions in guilt" argument fails because epistemic norms are self-vindicating in a way moral norms aren't. To even engage in rational discourse about what's true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There's no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.
I am not sure there even are intuitions or seemings of the sort philosophers often talk about, but if I were to weigh in on the matter, I'd have the exact opposite reaction. I can think of few things more obvious than that it doesn't make any sense to think some goals are more rational or correct than others. Goals are just descriptive facts about agents. They don't even seem like an appropriate target of evaluation for such judgments. To me, this sounds like saying that someone's birthday is more rationally warranted.
I also don't see why denying the objective reality of goodness would undermine inquiry. Why would it? I act in pursuit of my goals. Inquiry is a means of pursuing my goals. I don't even think it makes sense to talk of things being objectively good, but even if there were objective goods, I would not care about them.
Regarding the last remark: that there's no "deeply compelling reason to do so," you go on to say "For one thing: in order for there to be a suitably objective normative reason, normative realism would have to be true!"
But "deeply compelling" is not, to my mind, identical to "objective." I don't believe I or anyone else needs or benefits in any way from having objective reasons to do anything. We can do things because we want to. We don't need any more "reason" (if desires could be construed as reasons) than that.
So one way of thinking about this is as follows.
Imagine your goal is to eat every apple you see. I show you an apple. You acknowledge that it is in fact an apple, and you have seen the apple. I say you should then eat the apple. You refuse to eat the apple.
My view is that you (epistemically) ought to have eaten the apple. There is a normativity about reasons (and logic) that suggests I am justified in saying this. If you reject normativity about epistemic reasons, it seems to me that you don't have to accept that you ought to have eaten the apple.
Maybe there is something different about epistemic normativity than ethical normativity, or maybe there is something unique about epistemic normativity in the logical domain, but I'm not really sure what that special thing is.
I fail to follow the apple example. Why should I epistemically have eaten the apple? Either I have a true goal (and desire) to eat it or not. If I do, I will not refuse to eat it. If you assume it is a goal, I am assuming it is true, although people don't generally have those sorts of goals, I think. They look more like… lists of preferences, and degrees of each preference. Some are core preferences, difficult to change, while others are very mutable.
If by epistemic normativity you mean something like there are x, y, z reasons we should trust when we want to have proper beliefs about things, what I'd say is that this doesn't seem normative to me. I personally value truth very highly as an end in itself, but even if I didn't, truthful information is useful for acting to satisfy your desires; I just don't see why one has some obligation to do so. If someone doesn't follow the effective means to their ends, they're being ineffective or foolish, but not violating any norm. If you want a bridge to stand, build it this way; otherwise, it falls. But there's no moral or rational requirement to build it that way – you just won't get what you want.
I don't accept that I "ought to have eaten the apple." At the very least, I wouldn't accept this without knowing what you take that to mean. I don't think there are any irreducibly normative facts at all, nor do I think there is any such thing as "reasons" independent of descriptive facts about the relation between means and ends. So I don't know what you have in mind when you say that "you ought to have eaten the apple." I also don't know why you epistemically ought to have; why not prudentially, or in some other normative domain?
Could you perhaps explain what you have in mind by epistemic and moral normativity? There's a good chance I don't accept the account you have in mind.
What do you say to someone who doesn't share your goals? E.g., someone who thinks that happiness is only justified if it's earned, that most people do not deserve it because they do "bad thing X", and who is therefore against promoting happiness for them.
Generally parallel things to what I'd say to someone with different fundamental epistemic standards, like:
I could be wrong about what's justified. (Certainly my endorsing a standard doesn't suffice to make it justified – and likewise for them. We're not infallible!)
Check whether their answer seems objectionably ad hoc in some way, fails to treat like cases alike, is in tension with other claims they accept, or rests on dubious presuppositions ("why think X is so bad?"), etc.
If we get to bedrock, neither of us will be able to persuade the other to change their mind. Still, we may each think that (at least) one of us must be mistaken about what's genuinely justified.
+ we may at least identify some areas of overlap (e.g. it sure would suck if a clearly innocent individual were to suffer...)
The most important reason for my favoring moral realism is my sense that some goals
Your sense is just vibes.
In the same way that some things are true and worth believing, some things are good and worth desiring.
Some things may be true, depending on what you mean by true. "Worth believing" would presuppose realism, depending on what you mean by "worth". If this sentence matters to your argument then the whole thing is circular.
We should ultimately find the notion of justified goals to be no more deeply mysterious than that of justified beliefs.
Obviously not true, but Peter addresses this.
To deny the objective reality of either goodness or truth would seem to undermine inquiry, and there's no deeply compelling reason to do so.
Again, you are presupposing and/or being circular.
There isn't a coherent argument here. It's just you coming to the table with your priors and handwaving them. I appreciate you saying your piece but I don't find this even mildly compelling, and I'm struggling to understand the level of agreement.
Everyone has fundamental assumptions. You could imagine someone who disagrees with yours calling them "just vibes" or "presuppositions", but that doesn't yet establish that there's anything wrong with your assumptions. To show an error, the critic would need to put forward some (disputable) positive claims of their own.
The level of agreement just shows that plenty of others share my starting assumptions.
If you take arguments to be "circular" whenever a determined opponent could dispute them, I have news for you: there is no such thing as an argument that lacks this feature. (See my note on the limits of argumentation.)
I am trying to articulate (probably wrongly) the disconnect I perceive here. I think "vibes" might sound condescending, but ultimately, you seem to agree that assumptions (like math axioms) are not amenable to disputation. Technically, in philosophical practice, one can try to show, I imagine, that given assumption x some contradiction (or at least, something very generally perceived as wrong and undesirable) follows.
I do share the feeling expressed by Charlie Guthmann here that a lot of starting arguments for moral realists are just of the type "x is obvious/self-evident/feels good to be/feels worth believing", and when stated in that way, they feel equally obviously false to those who don't share those intuitions, and like magical thinking ("If you really want something, the universe conspires to make it come about", Paulo Coelho style). I feel more productive engagement strategies should avoid altogether any claims of the mentioned sort, and perhaps start with stating what might follow from realist assumptions that might be convincing/persuasive to the other side, and vice versa.