Do you think there's an epistemic fact of the matter as to what beliefs about the future are most reasonable and likely to be true given the past? (E.g., whether we should expect future emeralds to be green or grue?) Is probability end-relational too? Objective norms for inductive reasoning don't seem any less metaphysically mysterious than objective norms for practical reasoning.
One could just debunk all philosophical beliefs as mere "deeply embedded… intuitions" so as to avoid "mysterious metaphysical facts". But that then leaves you committed to thinking that all open philosophical questions, many of which seem to be sensible things to wonder about, are actually total nonsense. (Some do go this way, but it's a pretty extreme view!) We project green, the grue-speaker projects grue, and that's all there is to say. I just don't find such radical skepticism remotely credible. You might as well posit that the world was created 5 minutes ago, or that solipsism is true, in order to further trim down your ontology. I'd rather say: parsimony is not the only theoretical virtue; actually accounting for the full range of real questions we can ask matters too!
(I'm more sympathetic to the view that we can't know the answers to these questions than to the view that there is no real question here to ask.)
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal, accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are "intrinsically more rationally warranted," I'd ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us, but that's because we're humans with particular values, not because we've discovered some goal-independent truth.
I'm not embracing radical skepticism or saying moral questions are nonsense. I'm making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. "Is X wrong according to utilitarianism?" has a determinate, objective, mind-independent answer. "Is X wrong simpliciter?" does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that's because "justified" in epistemology means "likely to be true"; there's an implicit standard. In ethics, we need to make our standards explicit.
Why couldn't someone disagree with you about the purpose of belief-formation: "sure, truth-seeking feels obviously correct to you, but that's just because [some story]… not because we've discovered some goal-independent truth"?
Further, part of my point with induction is that merely aiming at truth doesn't settle the hard questions of epistemology (any more than aiming at the good settles the hard questions of axiology).
To see this: suppose that, oddly enough, the grue-speakers turn out to be right that all new emeralds discovered after 2030 are observed to be (what we call) blue. Surprising! Still, I take it that as of 2025, it was reasonable for us to expect future emeralds to be green, and unreasonable of the grue-speakers to expect them to be grue. Part of the challenge I meant to raise for you was: What grounds this epistemic fact? (Isn't it metaphysically mysterious to say that green as a property is privileged over "grue" for purposes of inductive reasoning? What could make that true, on your view? Don't you need to specify your "inductive standards"?)
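For readers unfamiliar with Goodman's puzzle, here is a minimal sketch of the two predicates in code; keying the definition to the 2030 date used above is just one standard way of spelling out "grue", and the function names are mine:

```python
# Goodman's "grue", keyed to the 2030 cutoff used in the example above.
CUTOFF_YEAR = 2030

def is_green(color: str) -> bool:
    return color == "green"

def is_grue(color: str, first_observed_year: int) -> bool:
    # x is grue iff x is first observed before the cutoff and is green,
    # or is not observed before the cutoff and is blue.
    if first_observed_year < CUTOFF_YEAR:
        return color == "green"
    return color == "blue"

# Every emerald observed so far satisfies BOTH predicates, so the
# evidence to date cannot distinguish "all emeralds are green" from
# "all emeralds are grue" -- yet the two hypotheses make opposite
# predictions about emeralds first observed after 2030.
assert is_green("green") and is_grue("green", 2025)
```

The point of the construction is that the evidence alone is symmetric between the two hypotheses; whatever privileges projecting green has to come from somewhere else.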
moral questions make perfect sense once we specify the evaluative standard
Once you fully specify the evaluative standard, there is no open question left to ask, just concealed tautologies. You've replaced all the important moral questions with trivial logical ones. ("Does P&Q&R imply P?") Normative questions it no longer makes sense to ask on your view include:
I already know what Nazism implies, and what liberalism implies, but which view is better justified?
I already know what the different theories of well-being imply. But which view is actually correct? Would plugging into the experience machine be good or bad for me?
I already know what moral theory I endorse, but would it be wise to "hedge" and take moral uncertainty into account, in case I'm wrong?
And in the epistemic case (once we extend your view to cover inductive standards):
I already know what the green vs. grue inductive standards have to say about whether I should expect future emeralds to be green or grue; but, in order to have the best shot at a true belief given my available evidence, which should I expect?
You're right that I need to bite the bullet on epistemic norms too, and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in a meta-ethical sense: it means "in order to have beliefs that accurately track reality." The difference is that this goal is so universally shared across different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I've "replaced all the important moral questions with trivial logical ones," but that's unfair. The questions remain very substantive; they just need proper framing:
Instead of "Which view is better justified?" we ask "Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?"
Instead of "Would the experience machine be good for me?" we ask "Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?"
These aren't trivial questions! They're complex empirical and philosophical questions. What I'm denying is that there's some further question, "But which view is really justified?", floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I'd say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That's still goal-relative, just at a meta-level.
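To make the structure concrete, here's a minimal sketch of that kind of meta-level standard; the theories, credences, and numerical scores are invented purely for illustration:

```python
# Expected moral value across theories, with made-up illustrative numbers.
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Hypothetical choice-worthiness each theory assigns to each act:
scores = {
    "break promise to boost welfare": {"utilitarianism": 10, "deontology": -50},
    "keep promise": {"utilitarianism": 4, "deontology": 5},
}

def expected_moral_value(act: str) -> float:
    # Weight each theory's verdict by your credence in that theory.
    return sum(credences[t] * scores[act][t] for t in credences)

# Expected values: -14.0 vs. 4.4, so hedging recommends keeping the
# promise: deontology's large penalty for breaking it dominates.
best_act = max(scores, key=expected_moral_value)
print(best_act)  # -> "keep promise"
```

Note that this presupposes the theories' scores can be placed on a common scale; intertheoretic comparability is itself a further standard one has to adopt, which only reinforces my point that the goal-relativity goes all the way up.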
The key insight remains: every "should" or "justified" implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We're not eliminating important questions; we're revealing what we're actually asking.
I agree it's often helpful to make our implicit standards explicit. But I disagree that that's "what we're actually asking". At least in my own normative thought, I don't just wonder about what meets my standards. And I don't just disagree with others about what does or doesn't meet their standards or mine. I think the most important disagreement of all is over which standards are really warranted.
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts. I think it's key to philosophy that there is more we can wonder about than just that. (There may not be any tractable disagreement once we get down to bedrock clashing standards, but I think there is still a further question over which we really disagree, even if we have no way to persuade the other of our position.)
It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
At least in my own normative thought, I don't just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I'm an illusionist about this in particular, as I don't even know what we could reasonably be disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness, but what is goodness? Is it platonic? Is it grounded in God? I'm not even sure what there is to dispute. I'd argue the best we can do is appeal to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency, etc.) and then see what best satisfies those.
~
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts.
Right, and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we're left with clashing bedrock intuitions. You want to insist there's still a fact about who's ultimately correct, but can't explain what would make it true.
~
It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
I think we're successfully engaging in a dispute here, and that does rather prove my position. I'm trying to argue that you're appealing to something that just doesn't exist, and that this is inconsistent with your epistemic values. Whether one can ground a judgement about what is "really warranted" is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there's a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards (logical consistency, explanatory power, and various other virtues). When we debate meta-ethics or meta-epistemology, we're not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The "companions in guilt" argument fails because epistemic norms are self-vindicating in a way moral norms aren't. To even engage in rational discourse about what's true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There's no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.