I agree that defining human values is a philosophical issue, but I would not describe it as “not a psychological issue at all.” It is in part a psychological issue insofar as understanding how people conceive of values is itself an empirical question. Questions about individual and intergroup differences in how people conceive of values, distinguish moral from nonmoral norms, etc. cannot be resolved by philosophy alone.
I am sympathetic to some of the criticisms of Greene’s work, but I do not think Berker’s critique is completely correct, though explaining in detail why I think Greene and others are correct that psychology can inform moral philosophy would call for a rather titanic post.
The tl;dr point I’d make is that yes, you can draw philosophical conclusions from empirical premises, provided your argument is presented as a conditional one in which you propose that certain philosophical positions are dependent on certain factual claims. If your interlocutor accepts those premises, then empirical findings that confirm or disconfirm those factual claims can compel specific philosophical conclusions. A toy version of this would be the following:
P1: If the sky is blue, then utilitarianism is true.
P2: The sky is blue.
C: Therefore, utilitarianism is true.
If someone accepts P1, and if P2 is an empirical claim, then empirical evidence for/against P2 bears on the conclusion.
This is the kind of move Greene wants to make.
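(As an aside, the logical form of that toy argument is just modus ponens. Here is a minimal sketch in Lean, with the proposition names as placeholders of my own rather than anything drawn from the argument itself:)

```lean
-- Toy argument in propositional form; both propositions are uninterpreted placeholders.
example (SkyIsBlue UtilitarianismIsTrue : Prop)
    (P1 : SkyIsBlue → UtilitarianismIsTrue) -- P1: if the sky is blue, then utilitarianism is true
    (P2 : SkyIsBlue)                        -- P2: the sky is blue (the empirical premise)
    : UtilitarianismIsTrue :=
  P1 P2                                     -- C: therefore, utilitarianism is true (modus ponens)
```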
The slightly longer version of what I’d say to a lot of Greene’s critics is that they misconstrue Greene’s arguments if they think he is attempting to move straight from descriptive claims to normative claims. In arguing for the primacy of utilitarian over deontological moral norms, Greene appeals to a premise he presumes to share with his interlocutors: that, on reflection, they will reject beliefs that are the result of epistemically dubious processes but retain those that are the result of epistemically justified processes.
If they share his views about what processes would in principle be justified/not justified, and if he can demonstrate that utilitarian judgments are reliably the result of justified processes but deontological judgments are not, then he has successfully appealed to empirical findings to draw a philosophical conclusion: that utilitarian judgments are justified and deontological ones are not. One could simply reject his premises about what constitutes justified/unjustified grounds for belief, and in that case his argument would not be convincing. I don’t endorse his conclusions, but that is because I think his empirical findings are not compelling, not because I think he has made any illicit philosophical moves.
[The tl;dr point I’d make is that yes, you can draw philosophical conclusions from empirical premises, provided your argument is presented as a conditional one in which you propose that certain philosophical positions are dependent on certain factual claims.]
You can do that if you want, but (1) it’s still a narrow case within a much larger philosophical framework and (2) such cases are usually pretty simple and don’t require sophisticated knowledge of psychology.
[The slightly longer version of what I’d say to a lot of Greene’s critics is that they misconstrue Greene’s arguments if they think he is attempting to move straight from descriptive claims to normative claims.]
To the contrary, Berker criticizes Greene precisely because his neuroscientific work is hardly relevant to the moral argument he’s making. You don’t need a complex account of neuroscience or psychology to know that people’s intuitions in the trolley problem change merely because of an apparently non-significant change in the situation. Philosophers knew that a century ago.
[If they share his views about what processes would in principle be justified/not justified, and if he can demonstrate that utilitarian judgments are reliably the result of justified processes but deontological judgments are not, then he has successfully appealed to empirical findings to draw a philosophical conclusion: that utilitarian judgments are justified and deontological ones are not.]
But nobody believes that judgements are right or wrong merely because of the process that produces them. That just provides grounds for skepticism about whether the judgements are reliable, and it is skepticism of a sort that was already known without any reference to psychology, for instance through Plantinga’s evolutionary argument against naturalism or evolutionary debunking arguments.
Also it’s worth clarifying that Greene only deals with a particular instance of a deontological judgement rather than deontological judgements in general.
[One could simply reject his premises about what constitutes justified/unjustified grounds for belief, and in that case his argument would not be convincing.]
It’s only a question of moral epistemology, so you could simply disagree with how he talks about intuitions or abandon the appeal to intuitions altogether (https://global.oup.com/academic/product/philosophy-without-intuitions-9780199644865?cc=us&lang=en&).

Again, it’s worth stressing that this is a fairly narrow and methodologically controversial area of moral philosophy. There is a difference between giving an opinion on a novel approach to a subject and telling a group of people what subject they need to study in order to be well-informed. Even if you do take the work of x-philers for granted, it’s not the sort of thing that can be done merely with an education in psychology and neuroscience, because people who understand that side of the story but not the actual philosophy will be unable to evaluate or make the substantive moral arguments that empirically informed work requires.
Thanks for the excellent reply.

Greene would probably not dispute that philosophers have generally agreed that the difference between the lever and footbridge cases is due to an “apparently non-significant change in the situation.”
However, what philosophers have typically done is either bite the bullet and say that one ought to push, or deny that one ought to push in the footbridge case while feeling the need to defend commonsense intuitions by offering a principled justification for the distinction between the two. The trolley literature is rife with attempts to vindicate an unwillingness to push, because these philosophers start from the assumption that commonsense moral intuitions track deep moral truths and that we must explicate the underlying, implicit justification our moral competence is picking up on.
What Greene is doing by appealing to neuroscientific/psychological evidence is offering a selective debunking explanation of some of those intuitions but not others. If the evidence demonstrates that one set of outputs (deontological judgments) is the result of an unreliable cognitive process, and another set of outputs (utilitarian judgments) is the result of reliable cognitive processes, then he can show that we have reason to doubt one set of intuitions but not the other, provided we agree with his criteria for what constitutes a reliable vs. an unreliable process. A selective debunking argument of this kind, relying as it does on the reliability of distinct psychological systems or processes, does in fact turn on the empirical evidence (in this case, on his dual-process model of moral cognition).
[But nobody believes that judgements are right or wrong merely because of the process that produces them.]
Sure, but Greene does not need to argue that deontological/utilitarian conclusions are correct or incorrect, only that we have reason to doubt one but not the other. If we can offer reasons to doubt the very psychological processes that give rise to deontological intuitions, that may be sufficient to warrant skepticism about the larger project of assuming that these intuitions are underwritten by implicit, non-obvious justifications that it is the philosopher’s job to extract and explicate.
You mention evolutionary debunking arguments as an alternative that is known “without any reference to psychology.” I think this is mistaken. Evolutionary debunking arguments are entirely predicated on specific empirical claims about the evolution of human psychology, and are thus a perfect example of the relevance of empirical findings to moral philosophy.
[Also it’s worth clarifying that Greene only deals with a particular instance of a deontological judgement rather than deontological judgements in general.]
Yes, I completely agree and I think this is a major weakness with Greene’s account.
I think there are two other major problems: the fMRI evidence he has is not very convincing, and trolley problems offer a distorted psychological picture of the distinction between utilitarian and non-utilitarian moral judgment. Recent work by Kahane shows that people who push in footbridge scenarios tend not to be utilitarians, just people with low empathy. The same people who push also tend to be more egoistic, less charitable, less impartial, less concerned about maximizing welfare, etc.
Regarding your last two points: I agree that one move is to simply reject how he talks about intuitions (or one could presumably raise other epistemic challenges). I also agree that training in psychology/neuroscience but not philosophy impairs one’s ability to evaluate arguments that depend on competence in both. I am not sure why you bring this up, though, so if there is an inference I should draw from it, help me out!