When do intuitions need to be reliable?
(Cross-posted from my Substack.)
Here’s an important way people might often talk past each other when discussing the role of intuitions in philosophy.[1]
Intuitions as predictors
When someone appeals to an intuition to argue for something, it typically makes sense to ask how reliable their intuition is. That is, how reliable is the intuition as a predictor of that “something”? The “something” in question might be some fact about the external world. Or it could be a fact about someone’s own future mental states, e.g., what they’d believe after thinking for a few years.
Some examples, which might seem obvious but will be helpful to set up the contrast:[2]
“My gut says not to trust this person I just met” is a good argument against trusting them (up to a point).
Because our social intuitions were probably selected for detecting exploitative individuals.
“Quantum superposition is really counterintuitive” is a weak argument against quantum mechanics.
Because our intuitions about physics were shaped by medium-sized objects, not subatomic particles (whose behavior quantum mechanics is meant to model).
“My gut says this chess position favors white” is a weak argument if you’re a beginner, but a strong argument if you’re a grandmaster.
Because grandmasters have analyzed oodles of positions and received consistent feedback through wins and losses, while beginners haven’t.
Intuitions as normative expressions
But, particularly in philosophy, not all intuitions are “predictors” in this (empirical) sense. Sometimes, when we report our intuition, we’re simply expressing how normatively compelling we find something.[3] Whenever this really is what we’re doing — if we’re not at all appealing to the intuition as a predictor, including in the ways discussed in the next section — then I think it’s a category error to ask how “reliable” the intuition is. For instance:
“The principle of indifference is a really intuitive way of assigning subjective probabilities. If all I know is that some list of outcomes are possible, and I don’t know anything else about them, it seems arbitrary to assign different probabilities to the different outcomes.”
“The law of noncontradiction is an extremely intuitive principle of logic. I can’t even conceive of a world where it’s false.”
“The repugnant conclusion is very counterintuitive.”
It seems bizarre to say, “You have no experience with worlds where other kinds of logic apply. So your intuition in favor of the law of noncontradiction is unreliable.” Or, “There are no relevant feedback loops shaping your intuitions about the goodness of abstract populations, so why trust your intuition against the repugnant conclusion?” (We might still reject these intuitions, but if so, this shouldn’t be because of their “unreliability”.)
Ambiguous cases
Sometimes, though, it’s unclear whether someone is reporting an intuition as a predictor or an expression of a normative attitude. So we need to pin down which of the two is meant, and then ask about the intuition’s “reliability” insofar as the intuition is supposed to be a predictor. Examples (meant only to illustrate the distinction, not to argue for my views):
“In the footbridge version of the trolley problem, it’s really counterintuitive to say you should push the fat man.” Some things this could mean:
“My strong intuition against pushing the fat man is evidence that there’s some deeper relevant difference from the classic trolley problem (where I think you should pull the lever), even if I can’t yet articulate it.”
I think this claim is plausibly debunked by, e.g., Greene’s (2013) and Singer’s (2005) arguments against the intuition’s reliability.
“I find it normatively compelling that you shouldn’t push the fat man, as a primitive. That is, it’s compelling even if there’s no deeper relevant difference between this case and the classic trolley problem.”
This claim doesn’t need to be justified by the intuition’s reliability. But if it isn’t meant to be a prediction, I’m pretty unsympathetic to it, because it’s not justified by any deeper reasons. More in this post.
“I find it normatively compelling that you shouldn’t push the fat man, as one of several mutually coherent moral judgments that I expect to survive reflective equilibrium.”
Similar to the option above (again, see this post).
“Pareto is extremely intuitive as a principle of social choice. If option A is better for some person than B, and at least as good as B for everyone else, why wouldn’t A be better for overall welfare?” Some things this could mean:
“My strong intuition in favor of Pareto is evidence that, if I reflected on various cases, my normative attitude about each of those cases would be aligned with Pareto.”
This seems like a reasonable claim. If you grasp the concept of Pareto, probably your approval of it in the abstract is correlated with your approval in concrete cases. I don’t expect this is usually what people mean when they say Pareto is really intuitive, though (at least, it’s not what I mean).
“I find Pareto normatively compelling as a primitive. It’s independently plausible, so it needs no further justification, at least as long as it’s consistent with other compelling principles.”
I’m very sympathetic to this claim. In particular, it doesn’t seem that my intuition about this principle is just as vulnerable to evolutionary debunking arguments as the fat man intuition-as-predictor.
“I find Pareto normatively compelling, as one of several mutually coherent judgments that I expect to survive reflective equilibrium.”
While I’m personally not that sympathetic to this claim (as a foundationalist), conditional on coherentism it seems pretty plausible, just as in the case directly above.
The bottom line is that we should be clear about when we’re appealing to (or critiquing) intuitions as predictors, vs. as normative expressions.
1. ^
Thanks to Niels Warncke for a discussion that inspired this post, and Jesse Clifton for suggestions.
2. ^
H/t Claude for most of these.
3. ^
For normative realists, “expressing how normatively compelling we find something” is supposed to be equivalent to appealing to the intuition as a predictor of the normative truth. This is why I say “(empirical)” in the claim “not all intuitions are “predictors” in this (empirical) sense”.
For what it’s worth, here’s some bibliography in case anyone is interested in researching (moral) intuitions in philosophy further.
An excerpt from my MA thesis:
“There are several possible characterizations of what intuitions are precisely supposed to be. Exceptionalists (e.g. Sosa, Ludwig) argue that intuitions are analytic or conceptual truths, a priori, and/or dealing with conceptual competence. Particularists (e.g. Bealer, Huemer, Schwitzgebel, Kagan) argue that intuitions have a distinct phenomenology, such as being snap judgments that are not consciously inferred from any other belief, or are a sui generis faculty. Minimalists (e.g. Machery, Lewis) argue that intuitions are not different from the application of concepts in ordinary life. (Machery, 2017, Ch. 2)”
I borrowed this terminology from Chapter 2 of Edouard Machery’s book, Philosophy Within Its Proper Bounds (2017).