I’m not totally sure I understand what you mean by IJ. It sounds like what you’re getting at is telling someone they can’t possibly have the fundamental intuition that they claim they have (either that they don’t really hold that intuition or that they are wrong to do so). E.g.: ‘I simply feel fundamentally that what matters most is positive conscious experiences’; ‘That seems like a crazy thing to think!’ But then your example is
“But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.”.
That seems like a different structure of argument, more akin to: ‘I feel that what matters most is having positive conscious experiences (X)’; ‘But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!’ The difference is significant: if the person is coming up with a novel Y, or even one that hasn’t been made salient in this context, that move actually seems really useful. Since that’s the case, I assume you meant IJ to refer to arguments more like the former kind.
I’m strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and should be used more by EAs than it is by philosophers. Philosophers have strong incentives to have views that no other philosophers hold, because to publish you have to be presenting a novel argument and it’s easier to describe and explore a novel theory you feel invested in. It’s also more interesting for other philosophers to explore novel theories, so in a sense they don’t have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments.

Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it’s absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it’s important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. ‘No other serious philosophers hold that view’ might be a bonus in academic philosophy, but is a serious worry in EA. So I think when people say ‘Your intuition that A is ludicrous’, they might be meaning something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.
Thanks for this, Michelle. I don’t think I’ve quite worked out how to present what I mean, which is probably why it isn’t clear.
To try again: what I’m alluding to are argumentative scenarios where X and Y are disagreeing, and it’s apparent to both of them that X knows what view he/she holds, knows what its weird implications are, and still accepts the view as being, on balance, right.
Intuition jousting is where Y then says things like “but that’s nuts!” Note that Y isn’t providing an argument at this point. It’s a purely rhetorical move that uses social pressure (“I don’t want people to think I’m nuts”) to try to win the argument. I don’t think conversations are very interesting or useful at this stage. Note also that X is able to turn this around on Y and say “but your view has different weird implications of its own, and that’s more nuts!” It’s like a joust because the two people are just testing who’s able to hold on to their view under pressure from the other.
I suppose Y could counter-counter-attack X and say “yeah, but more people who have thought about this deeply agree with me”. It’s not clear what logical (rather than rhetorical) force this adds. It seems like ‘deeply’ would, in any case, be doing most of the work in that scenario.
I’m somewhat unsure how to think about moral truth here. However, if you do think there is one moral truth to be found, I would think you would really want to understand people who disagree with you, in case you might be wrong. As a practical matter, this speaks strongly in favour of engaging in considerate, polite and charitable disagreement (“intuition exchanging”) rather than intuition jousting anyway. From my anecdata, there are both types in the EA community, and it’s only the jousting variety I object to.
Appealing to rhetoric in this way is, I agree, unjustifiable. But I thought there might be a valid point that tacked a bit closer to the spirit of your original post. There is no agreed methodology in moral philosophy, which I think explains a lot of persistent moral disagreement. People eventually start just trading which intuitions they think are the most plausible (“I’m happy to accept the repugnant conclusion, but not the sadistic one”, etc.). But intuitions are ten a penny, so this doesn’t really take us very far: smart people have summoned intuitions against even the analytical truth that betterness is transitive.
What we really need is an account of which moral intuitions ought to be held on to and which ones we should get rid of. One might appeal to cognitive biases, to selective evolutionary debunking arguments, and so on. For example:
One might resist prioritarianism by noting that people seamlessly shift from accepting that resources have diminishing marginal utility to accepting that utility has diminishing marginal utility. That is, people have intuitions that utility diminishes in value with respect to that same utility, which makes no sense. See http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.5213&rep=rep1&type=pdf.
One might debunk an anti-aggregative view by appealing to people’s failure to grasp large numbers.
One might debunk an anti-incest norm by noting that it is explained by evolutionary selective pressure rather than by apprehension of an independent normative truth.
Incorporating your suggestion, then: when people start to intuition joust, perhaps a better idea than the two I mentioned would be to try to debunk each other’s intuitions.
Do people think this debunking approach can go all the way? If it doesn’t, it looks like a more refined version of the problem still recurs.
Particularly interesting stuff about prioritarianism.
It’s a difficult question when we can stop debunking and what counts as successful debunking. But this is just to say that moral epistemology is difficult. I have my own views on what can and can’t be debunked; e.g. I don’t see how you could debunk the intuition that searing pain is bad. But this is a massive issue.
You might want to look at Huemer’s stuff on intuitionism: https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/revisionary-intuitionism/EE5C8F3B9F457168029C7169BA1D62AD
That’s helpful, thanks.
For a related example, see Carl’s comment on why presentism doesn’t have the implications some people claim it does.