It’s been many years (about 6?) since I’ve read an argument like this, so, y’know, you win on nostalgia. I also notice that my 12-year-old self would’ve been really excited to be in a position to write a response to this, and given that I’ve never actually responded to this argument outside of my own head (and otherwise am never likely to in the future), I’m going to do some acausal trade with my 12-year-old self here: below are my thoughts on the post.
Also, sorry it’s so long, I didn’t have the time to make it short.
I appreciate you making this post relatively concise for arguments in its reference class (which usually wax long). Here’s what seems to me to be a key crux of this argument (I’ve bolded the key sentences):
It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems to particularly be the case when it comes to a specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor’s post here for more on the objectivist/non-objectivist distinction.)
...[I]magine that moral reasons were all centred around maximising the number of paperclips in the universe. It’s not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons, is more complicated; see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to—worlds with normative realism and human access to moral facts—are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.
Object-level response: this is confused about how values come into existence.
The things I care about aren’t written into the fabric of the universe. There is no clause in the laws of physics to distinguish what’s good and bad. I am a human being with desires and goals, and those are things I *actually care about*.
For any ‘moral’ law handed to me on high, I can always ask why I should care about it. But when I actually care, there’s no question. When I am suffering, when those around me suffer, or when someone I love is happy, no part of me is asking “Yeah, but why should I care about this?” These sorts of things I’m happy to start with as primitive, and this question of abstractly where meaning comes from is secondary.
(As for the particular question of how evolution created us and the things we care about, how the bloody selection of evolution could select for love, for familial bonds, for humility, and for playful curiosity about how the world works, I recommend any of the standard introductions to evolutionary psychology, which I also found valuable as a teenager. Robert Wright’s “The Moral Animal” was really great, and Joshua Greene’s “Moral Tribes” is a slightly more abstract version that also contains some key insights about how morality actually works.)
My model of the person who believes the OP wants to say
“Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they’re actually good?”
To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem—I have no further confusion. I care about these things and it makes sense that I would. There is no part of me wondering whether there’s something else I should care about instead, the world just makes sense now.
To point to an example of the process turning out the other way: there’s been a variety of updates I’ve made where I no longer trust or endorse basic emotions and intuitions, since a variety of factors have all pointed in the same direction:
Learning about scope insensitivity and framing effects
Learning about how the rate of economic growth has changed so suddenly since the industrial revolution (i.e. very recently in evolutionary terms)
Learning about the various Dutch book theorems and axioms of rational behaviour that imply a rational agent is equivalent to an expected-utility maximiser.
These have radically changed which of my impulses I trust, endorse, and listen to. After seeing these, I realise that subprocesses in my brain are trying to approximate how much I should care about groups of different scales and failing at their goal, so I learn to ignore those and teach myself to do normative reasoning (e.g. taking orders of magnitude into account intuitively), because it’s what I reflectively care about.
I can overcome basic drives when I discover large amounts of evidence from different sources that predicts my experience, ties together into a cohesive worldview for me, and explains how the drive isn’t in accordance with my deepest values. Throwing out the basic things I care about because of an abstract argument, with none of the strong varieties of evidential backing described above, isn’t how this works.
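(As an aside, the Dutch book point above can be made concrete with a toy numerical sketch. The numbers here are my own illustrative choices, not from any of the linked material: an agent whose credences in an event and its complement sum to more than 1 will buy a pair of bets, each priced at its stated credence, that together lose money no matter what happens.)

```python
# Toy Dutch book: an agent assigns credence 0.6 to an event A *and* 0.6 to
# not-A (summing to 1.2 > 1, violating the probability axioms). A bookie
# sells them both bets at those prices and profits in every outcome.

def bet_payout(price, stake, wins):
    """Net payout of buying a bet that pays `stake` if it wins, at cost `price`."""
    return (stake if wins else 0.0) - price

p_a, p_not_a = 0.6, 0.6  # incoherent credences
stake = 1.0

for a_occurs in (True, False):
    total = (bet_payout(p_a * stake, stake, a_occurs)
             + bet_payout(p_not_a * stake, stake, not a_occurs))
    # Exactly one bet pays out, so the agent nets 1.0 - 1.2 = -0.2 either way.
    print(a_occurs, round(total, 2))
```

Either branch prints a net loss of 0.2: the incoherence itself, not bad luck, is what the bookie exploits, which is why the theorems tie rationality to expected-utility maximisation.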
Meta-level response: I don’t trust the intellectual tradition of this group of arguments. I think religions have attempted to have a serious conversation about meaning and value in the past, and I’m actually interested in that conversation (which is largely anthropological and psychological). But my impression of modern apologetics is primarily one of rationalization, not the source of religion’s understanding of meaning, but a post-facto justification.
Having not personally read any of his books, I hear C.S. Lewis is the guy who most recently made serious attempts to engage with morality and values. But the most recent wave of this philosophy of religion stuff, since the dawn of the internet era, is represented by folks like the philosopher/theologian/public-debater William Lane Craig (who I watched a bunch as a young teenager), who sees argument and reason as secondary to his beliefs.
Here are some relevant quotes from Lane Craig, borrowed from this post by Luke Muehlhauser (sources are behind the link):
…the way we know Christianity to be true is by the self-authenticating witness of God’s Holy Spirit. Now what do I mean by that? I mean that the experience of the Holy Spirit is… unmistakable… for him who has it; …that arguments and evidence incompatible with that truth are overwhelmed by the experience of the Holy Spirit…
…it is the self-authenticating witness of the Holy Spirit that gives us the fundamental knowledge of Christianity’s truth. Therefore, the only role left for argument and evidence to play is a subsidiary role… The magisterial use of reason occurs when reason stands over and above the gospel… and judges it on the basis of argument and evidence. The ministerial use of reason occurs when reason submits to and serves the gospel. In light of the Spirit’s witness, only the ministerial use of reason is legitimate. Philosophy is rightly the handmaid of theology. Reason is a tool to help us better understand and defend our faith…
[The inner witness of the Spirit] trumps all other evidence.
My impression is that it’s fair to characterise modern apologetics as searching for arguments to provide in defense of their beliefs, and not as the cause of them, nor as an accurate model of the world. Recall the principle of the bottom line:
Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren’t willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well. The arguments you write afterward, above the bottom line, will not change anything either way.
My high-confidence understanding of the whole space of apologetics is that the process generating them is, on a basic level, not systematically correlated with reality (and man, argument space is so big, just choosing which hypothesis to privilege is most of the work, so it’s not even worth exploring the particular mistakes made once you’ve reached this conclusion).
This is very different from many other fields. If a person with expertise in chemistry challenged me and offered an argument as severely mistaken as I believe the one in the OP to be, I would still be interested in further discussion and understanding their views, because these models have predicted lots of other really important stuff. Philosophy of religion is neither based in the interesting parts of religion (which are somewhat more anthropological and psychological), nor is it based in understanding some phenomena of the world where it’s actually made progress; it is instead some entirely different beast, not searching for truth whatsoever. The people seem nice and all, but I don’t think it’s worth spending time engaging with intellectually.
If you find yourself confused by a theologian’s argument, I don’t mean to say you should ignore that and pretend that you’re not confused. That’s a deeply anti-epistemic move. But I think that resolving these particular confusions will not be interesting or useful; it will just end up being a silly error. I also don’t expect the field of theology / philosophy of religion / apologetics to accept your result; I think there will be further confusions, and I think this is fine and correct and you should move on to other more important problems.
---
To clarify, I wrote down my meta-level response out of a desire to be honest about my beliefs here, and did not mean to signal that I would respond to further comments in this thread any less than usual :)
Thanks Ben! I’ll try and comment on your object-level response in this comment and your meta-level response in another.
Alas, I’m not sure I properly track the full extent of your argument, but I’ll try and focus on the parts that are trackable to me. So apologies if I’m failing to understand the force of your argument because I am missing a crucial part.
I see the crux of our disagreement summed up here:
My model of the person who believes the OP wants to say
“Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they’re actually good?”
To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem—I have no further confusion.
I don’t see how ‘understanding that evolution created these cares and desires in me resolves the problem.’
Desires on their own are at most relevant for *prudential* reasons for action, i.e. I want chocolate, so I have a [prudential] reason to get chocolate. I attempt to deal (admittedly briefly) with prudential reasons in the appendix. Note that I don’t think these sorts of prudential reasons (if they exist) amount to moral reasons.
Unless a mere desire finds itself in a world where some broader moral theory is at play (e.g. preference utilitarianism), which would itself need to enjoy an appropriate meta-ethical grounding/truthmaker (perhaps Parfit’s Non-Metaphysical Non-Naturalist Normative Cognitivism), the mere desire won’t create moral reasons for action. However, if you do offer some moral theory, then this just runs into the argument of my post: how would the human gain access to the relevant moral theory?
In short, if you’re just saying: ‘actually what we talk about as moral reasons for action just boil down to prudential reasons for action as they are just desires I have’ then you’ll need to decide whether it’s plausible to think that a mere desire actually can create an objectively binding prudential reason for action.
If instead you’re saying ‘moral reasons are just what I plainly and simply comprehend, and they are primitive so can have no further explanation’, then I have the simple question of why you think they are primitive, when it seems we can ask the seemingly legitimate question (which you preempt) of ‘but why is X actually good?’
However, I imagine that neither of my two summaries of your argument really are what you are driving for, so apologies if that’s the case.
*nods* I think what I wrote there wasn’t very clear.
To restate my general point: I’m suggesting that your general frame contains a weird inversion. You’re supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others’ behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.
In the first one, you would be surprised to find out we’ve randomly been selected to have the right morality by evolution. In the second, it’s almost definitional that evolution has produced us to have the right morality. There’s still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.
Does the former seem like an accurate description of the way you’re proposing to think about morality?
Yep, what you suggest I think isn’t far from the mark. Though note I’m open to the possibility of normative realism being false; it could be that we are all fooled and that there are no true moral facts.
I just think this question of ‘what grounds this moral experience’ is the right one to ask. On the way you’ve articulated it, I just think your mere feelings about behaviours don’t amount to normative reasons for action, unless you can explain how these normative properties enter the picture.
Note that normative reasons are weird: they are not like anything descriptive. They have this weird property of what I sometimes call ‘binding oughtness’, in that they rationally compel the agent to do particular things. It’s not obvious to me why your mere desires would throw up this special and weird property of binding oughtness.
I don’t trust the intellectual tradition of this argumentative style.
It’s not obvious that anyone’s asking you to trust anything? Surely those offering arguments are just asking you to assess an argument on its merits, rather than by the family of thinkers the argument emerges from?
But my impression of modern apologetics is primarily one of rationalization, not the source of religion’s understanding of meaning, but a post-facto justification.
I’m reasonably involved in the apologetics community. I think there is a good deal of rationalization going on, probably more so than in other communities, though all communities do this to some extent. However I don’t think we need to worry about the intentions of those offering the arguments. We can just assess the offered arguments one by one and see whether they are successful?
William Lane Craig (who I watched a bunch as a young teenager), who sees argument and reason as secondary to his belief
I don’t think the argument you quote is quite as silly as it sounds, a lot depends on your view within epistemology of the internalism/externalism debate. Craig subscribes to reformed epistemology, where one can be warranted in believing something without having arguments for the belief.
This doesn’t seem to me to be as silly as it first sounds. Imagine we simulated beings and then just dropped true beliefs into their heads about complicated maths theorems that they’d have no normal way of knowing. It seems to me that the simulated beings would be warranted in believing these facts (as they emerged from a reliable belief-forming process) even if they couldn’t give arguments for why those maths theorems are the case.
This is what Craig and other reformed epistemologists are saying that God does when the Holy Spirit creates belief in God in people even if they can’t offer arguments for it being the case. Given that Craig believes this, he doesn’t think that we need arguments if we have the testimony of the Holy Spirit and that’s why he’s happy to talk about reason being non-magisterial.
My high-confidence understanding of the whole space of apologetics is that the process generating them is, on a basic level, not systematically correlated with reality
I have sympathy for your concern; this seems to be a world in which motivated reasoning might naturally be more present than in chemistry. However, I don’t know how much work you’ve done in philosophy of religion or philosophy more generally, but my assessment is that philosophy of religion is as well argued and thoughtful as many of the other branches of philosophy. As a result I don’t have this fear that motivated reasoning wipes the field out. As I defended before, we can look at each argument on its own merit.