I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out.
If that’s so, one might wonder why that happens.
In these cases, it seems that there are three questions; e.g.:
1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?
You claim that people get the answers to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.
It’s possible that in the cases you discuss, people tend to have the firmest intuitions about question 3) (“the conclusion”). E.g. they are more convinced that we ought not to be Machiavellian than that consequentialism is correct/incorrect or that consequentialism entails/does not entail Machiavellianism.
If that’s the case, then it would be unsurprising that mistakes cancel each other out. E.g. someone who comes to believe that consequentialism entails Machiavellianism would be inclined to reject consequentialism, since they would otherwise need to accept that we ought to be Machiavellian (which, by hypothesis, they don’t).
(Effectively, I’m saying that people reason holistically, reflective equilibrium-style; and not just from premises to conclusions.)
A corollary of this is that “a little knowledge” may not be as dangerous as one might believe. Suppose that someone initially believes that consequentialism is wrong (Question 1), that consequentialism entails Machiavellianism (Question 2), and that we ought not to be Machiavellian (Question 3). They then change their view on Question 1, adopting consequentialism. That creates an inconsistency among their three beliefs. But if they hold their belief about Question 3 (the conclusion) more firmly than their belief about Question 2 (the other premise), they’ll resolve this inconsistency by rejecting the other incorrect premise, not by endorsing the dangerous conclusion that we ought to be Machiavellian.
My argument is of course schematic, and how plausible it is will no doubt vary depending on which of the six cases you discuss we consider. I do think that “a little knowledge” is sometimes dangerous in the way you suggest. Nevertheless, I think the mechanism I discuss is worth remembering.
In general, I think a little knowledge is usually beneficial, meaning our prior that it’s harmful in an individual case should be reasonably low. However, priors can of course be overturned by evidence in specific cases.
Thanks, yeah, I think I agree with all of that!