I think all reasons are hypothetical, but some hypotheticals (like “if you want to avoid unnecessary suffering...”) are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view—we think the guard shouldn’t follow professional norms precisely because we’re applying a different value system (human welfare over rule-following). There’s no view from nowhere; there’s just the fact that (luckily) most of us share similar core values.
Do you think there’s an epistemic fact of the matter as to what beliefs about the future are most reasonable and likely to be true given the past? (E.g., whether we should expect future emeralds to be green or grue?) Is probability end-relational too? Objective norms for inductive reasoning don’t seem any less metaphysically mysterious than objective norms for practical reasoning.
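For concreteness, the predicate can be stated roughly as follows (I use the 2030 cutoff from the example below; any fixed future time would do):

$$
\text{Grue}(x) \;\equiv\; \bigl(x \text{ is observed before } 2030 \,\land\, x \text{ is green}\bigr) \;\lor\; \bigl(x \text{ is not observed before } 2030 \,\land\, x \text{ is blue}\bigr)
$$

So every emerald observed to date counts as both green and grue; the two hypotheses only come apart for emeralds first observed after 2030.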
One could just debunk all philosophical beliefs as mere “deeply embedded… intuitions” so as to avoid “mysterious metaphysical facts”. But that then leaves you committed to thinking that all open philosophical questions—many of which seem to be sensible things to wonder about—are actually total nonsense. (Some do go this way, but it’s a pretty extreme view!) On that view, we project green, the grue-speaker projects grue, and that’s all there is to say. I just don’t find such radical skepticism remotely credible. You might as well posit that the world was created 5 minutes ago, or that solipsism is true, in order to further trim down your ontology. I’d rather say: parsimony is not the only theoretical virtue; actually accounting for the full range of real questions we can ask matters too!
(I’m more sympathetic to the view that we can’t know the answers to these questions than to the view that there is no real question here to ask.)
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I’d argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal—accurate representation of reality. When we ask “should I expect emeralds to be green or grue?” we’re implicitly asking “in order to have beliefs that accurately track reality, what should I expect?” The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are “intrinsically more rationally warranted,” I’d ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us—but that’s because we’re humans with particular values, not because we’ve discovered some goal-independent truth.
I’m not embracing radical skepticism or saying moral questions are nonsense. I’m making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. “Is X wrong according to utilitarianism?” has a determinate, objective, mind-independent answer. “Is X wrong simpliciter?” does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that’s because “justified” in epistemology means “likely to be true”—there’s an implicit standard. In ethics, we need to make our standards explicit.
Why couldn’t someone disagree with you about the purpose of belief-formation: “sure, truth-seeking feels obviously correct to you, but that’s just because [some story]… not because we’ve discovered some goal-independent truth.”
Further, part of my point with induction is that merely aiming at truth doesn’t settle the hard questions of epistemology (any more than aiming at the good settles the hard questions of axiology).
To see this: suppose that, oddly enough, the grue-speakers turn out to be right that all new emeralds discovered after 2030 are observed to be (what we call) blue. Surprising! Still, I take it that as of 2025, it was reasonable for us to expect future emeralds to be green, and unreasonable of the grue-speakers to expect them to be grue. Part of the challenge I meant to raise for you was: What grounds this epistemic fact? (Isn’t it metaphysically mysterious to say that green as a property is privileged over “grue” for purposes of inductive reasoning? What could make that true, on your view? Don’t you need to specify your “inductive standards”?)
moral questions make perfect sense once we specify the evaluative standard
Once you fully specify the evaluative standard, there is no open question left to ask, just concealed tautologies. You’ve replaced all the important moral questions with trivial logical ones. (“Does P&Q&R imply P?”) Normative questions it no longer makes sense to ask on your view include:
I already know what Nazism implies, and what liberalism implies, but which view is better justified?
I already know what the different theories of well-being imply. But which view is actually correct? Would plugging into the experience machine be good or bad for me?
I already know what moral theory I endorse, but would it be wise to “hedge” and take moral uncertainty into account, in case I’m wrong?
And in the epistemic case (once we extend your view to cover inductive standards):
I already know what the green vs grue inductive standards have to say about whether I should expect future emeralds to be green or grue; but—in order to have the best shot at a true belief, given my available evidence—which should I expect?
You’re right that I need to bite the bullet on epistemic norms too, and I do think that’s a highly effective reply. But at the end of the day, yes, I think “reasonable” in epistemology is also implicitly goal-relative in a meta-ethical sense—it means “in order to have beliefs that accurately track reality.” The difference is that this goal is so universally shared across different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I’ve “replaced all the important moral questions with trivial logical ones,” but that’s unfair. The questions remain very substantive—they just need proper framing:
Instead of “Which view is better justified?” we ask “Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?”
Instead of “Would the experience machine be good for me?” we ask “Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?”
These aren’t trivial questions! They’re complex empirical and philosophical questions. What I’m denying is that there’s some further question—“But which view is really justified?”—floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I’d say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That’s still goal-relative, just at a meta-level.
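Concretely, the hedging I have in mind can be sketched like this (a rough formalization, and it assumes the contested premise that values can be meaningfully compared across theories): if $p(T_i)$ is your credence in moral theory $T_i$ and $V_{T_i}(a)$ is the value $T_i$ assigns to act $a$, then

$$
EV(a) \;=\; \sum_i p(T_i)\, V_{T_i}(a), \qquad \text{choose } a^{*} \in \arg\max_{a} EV(a).
$$

Everything here is still relative to a chosen meta-standard (the credences, the value scales, and the maximization rule itself), which is exactly my point.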
The key insight remains: every “should” or “justified” implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We’re not eliminating important questions—we’re revealing what we’re actually asking.
I agree it’s often helpful to make our implicit standards explicit. But I disagree that that’s “what we’re actually asking”. At least in my own normative thought, I don’t just wonder about what meets my standards. And I don’t just disagree with others about what does or doesn’t meet their standards or mine. I think the most important disagreement of all is over which standards are really warranted.
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts. I think it’s key to philosophy that there is more we can wonder about than just that. (There may not be any tractable disagreement once we get down to bedrock clashing standards, but I think there is still a further question over which we really disagree, even if we have no way to persuade the other of our position.)
It’s interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
At least in my own normative thought, I don’t just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I’m an illusionist about this in particular, as I don’t even know what we could reasonably be disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness, but what is goodness? Is it platonic? Is it grounded in God? I’m not even sure what there is to dispute. I’d argue the best we can do is appeal to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency, etc.) and then see what best satisfies those.
~
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts.
Right—and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we’re left with clashing bedrock intuitions. You want to insist there’s still a fact about who’s ultimately correct, but you can’t explain what would make it true.
~
It’s interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
I think we’re successfully engaging in a dispute here, and that does kind of prove my position. I’m trying to argue that you’re appealing to something that just doesn’t exist, and that this is inconsistent with your epistemic values. Whether one can ground a judgment about what is “really warranted” is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there’s a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards—logical consistency, explanatory power, and various other virtues. When we debate meta-ethics or meta-epistemology, we’re not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The “companions in guilt” argument fails because epistemic norms are self-vindicating in a way moral norms aren’t. To even engage in rational discourse about what’s true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There’s no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.
Thanks!