Thanks for this, Michelle. I don’t think I’ve quite worked out how to present what I mean, which is probably why it isn’t clear.
To try again, what I’m alluding to are argumentative scenarios where X and Y are disagreeing, and it’s apparent to both of them that X knows what view he/she holds, knows what its weird implications are, and still accepts the view as being, on balance, right.
Intuition jousting is where Y then says things like “but that’s nuts!” Note that Y isn’t providing an argument at this point. It’s a purely rhetorical move that uses social pressure (“I don’t want people to think I’m nuts”) to try to win the argument. I don’t think conversations are very interesting or useful at this stage. Note also that X can turn this around on Y and say “but your view has different weird implications of its own, and those are more nuts!” It’s like a joust because the two people are just testing who can hold on to their view under pressure from the other.
I suppose Y could counter-counter-attack X and say “yeah, but more people who have thought about this deeply agree with me”. It’s not clear what logical (rather than rhetorical) force this adds. It seems like ‘deeply’ would, in any case, be doing most of the work in that scenario.
I’m somewhat unsure how to think about moral truth here. However, if you do think there is one moral truth to be found, I would think you would really want to understand people who disagree with you, in case you might be wrong. As a practical matter, this speaks strongly in favour of engaging in considerate, polite and charitable disagreement (“intuition exchanging”) rather than intuition jousting anyway. From my anecdata, both types exist in the EA community, and it’s only the jousting variety I object to.
Appealing to rhetoric in this way is, I agree, unjustifiable. But I thought there might be a valid point that tacked a bit closer to the spirit of your original post. There is no agreed methodology in moral philosophy, which I think explains a lot of persisting moral disagreement. People eventually start just trading claims about which intuitions they find most plausible (“I’m happy to accept the repugnant conclusion, but not the sadistic one”, etc.). But intuitions are ten a penny, so this doesn’t really take us very far: smart people have summoned intuitions against even the analytical truth that betterness is transitive.
What we really need is an account of which moral intuitions ought to be held on to and which ones we should give up. One might appeal to cognitive biases, to selective evolutionary debunking arguments, and so on. For example:
One might resist prioritarianism by noting that people seamlessly shift from accepting that resources have diminishing marginal utility to accepting that utility itself has diminishing marginal utility. That is, people end up with intuitions about utility diminishing with respect to that same utility, which makes no sense (see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.5213&rep=rep1&type=pdf; I sketch the point after these examples).
One might debunk an anti-aggregative view by appealing to people’s failure to grasp large numbers.
One might debunk an anti-incest norm by noting that it is explained by evolutionary selective pressure rather than by apprehension of an independent normative truth.
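To sketch the prioritarianism point with a toy formalisation (this is my gloss, not necessarily the linked paper’s framing): diminishing marginal utility of resources just says that utility is concave in resources, e.g.

u(r) = log(r), so du/dr = 1/r, which shrinks as resources r grow.

But the ‘marginal utility of utility’, taken with respect to utility itself, is du/du = 1, a constant, so utility cannot diminish marginally in itself. A concave weighting w(u) over utility quietly introduces a second value scale rather than describing utility’s own marginal value, and that slide is plausibly what the prioritarian intuition trades on.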
You might want to look at Huemer’s stuff on intuitionism: https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/revisionary-intuitionism/EE5C8F3B9F457168029C7169BA1D62AD
That’s helpful, thanks.
Incorporating your suggestion, then: when people start to intuition joust, perhaps a better idea than the two I mentioned would be to try to debunk each other’s intuitions.
Do people think this debunking approach can go all the way? If it can’t, it looks like a more refined version of the problem recurs.
Particularly interesting stuff about prioritarianism.
It’s a difficult question when we can stop debunking and what counts as successful debunking. But that is just to say that moral epistemology is difficult. I have my own views on what can and can’t be debunked; e.g. I don’t see how you could debunk the intuition that searing pain is bad. But this is a massive issue.