Similar arguments for complex cluelessness also seem to apply to my own decisions about what it would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.
I appreciate you making this point, as I think it’s interesting and I hadn’t come across it before. However, I don’t currently find it that compelling, for the following reasons [these are sketches, not fully fleshed out arguments I expect to be able to defend in all respects]:
I think there is ample room for biting the bullet regarding rational self-interest, while avoiding counter-intuitive conclusions. To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car. I don’t think the intuition that it’d be crazy to wander blindly into the road is driven by any theory that appeals exclusively to long-term consequences on my well-being, nor do I think it needs such a philosophical fundament. I think a theory of self-interest that just appeals to consequences for my time-neutral lifetime wellbeing is counter-intuitive and faces various problems anyway (see e.g. the first part of Reasons and Persons). If it were the case that I’m clueless about the long-term consequences of my actions on my wellbeing, I think that would merely be yet another problem for the rational theory of self-interest; but I was inclined to discard that theory anyway, and don’t think that discarding it would undermine any of my common sense beliefs. So while I agree that there might be a problem analogous to cluelessness for philosophers who want to come up with a defensible theory of self-interest, I don’t think we get a common-sense-based argument against cluelessness.
However, I think one may well be able to dodge the bullet, at least to some extent. I think it’s simply not true that we are as clueless about our own future wellbeing as we are about the consequences of our actions for long-run impartial goodness, for the following reasons:
Roughly speaking, my own future predictable influence over my own future wellbeing is much greater than my own future influence over impartial goodness. Whatever happens to me, I’ll know how well off I am, and I’ll be able to react to it; something pretty drastic would need to happen to have a very large and lasting effect on my wellbeing. By contrast, I usually simply won’t know how impartial goodness has changed as a result of my actions, and even if I did, it would often be beyond my power to do something about it. If the job I enthusiastically took 10 years ago is now bad for me, I can quit. If the person I rescued from drowning when they were a child is now a dictator wrecking Europe, that’s too bad but I’m stuck with it.
The time horizon is much shorter, and there is limited opportunity for the indirect effects of my actions to affect me. Suppose I’ll still be alive in 60 years. It will, e.g., still be true that my actions will have far-reaching effects on the identities of people that will be born in the next 60 years. However, the number of identities affected, and the indirect effects flowing from this, will be much more limited compared to time horizons that are orders of magnitude longer; more importantly, most of these indirect effects won’t affect me in any systematic way. While there will be some effects on me depending on which people will be born in, say, Nepal in 40 years, I think the defence that these effects will “cancel out” in expectation works, and similarly for most other indirect effects on my wellbeing.
Maybe most importantly: I think that a large part of the force of the “new problem of cluelessness” (i.e., instances where the defence that “indirect effects cancel out in expectation” doesn’t work) comes from the contingent fact that (according to most plausible axiologies) impartial goodness is freaking weird. I’m not sure how to make this precise, but it seems to me that an important part of the story is that impartial goodness, unlike my own wellbeing, hinges on heavy-tailed phenomena spread out over different scales—e.g., maybe I’m just barely able to guess the sign of the impact of AMF on population size, but assessing the impacts on impartial goodness would also require me to assess the impacts of population size on economic growth, technological progress, the trajectory of farmed animal populations, risks of human extinction, etc. That is, small indirect net effects of my actions on impartial goodness might blow up due to their effects on much larger known and unknown levers, giving rise to the familiar phenomenon of “crucial considerations.” For all I know, in an idealized epistemic state I’d realize that the effects of my actions are dominated by their indirect effects on electron suffering (using this as a token example of “something really weird I haven’t considered”, not to suggest we ought to in fact take electron suffering seriously). By contrast, I don’t think there could be similar “crucial considerations” for my own wellbeing. It is not plausible that, say, the effect of my walking into the road on my wellbeing will actually be dominated by the increased likelihood of seeing a red car; it seems that the “worst” kind of issues I’ll encounter are things like “does drinking one can of Coke Zero per day increase or decrease my life expectancy?”, which is a challenging but not hopeless problem; it’s something I’m uncertain, but not clueless, about.
To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car.
I don’t think this defence works, because some of your current preferences are manifestly about future events. Insisting that all these preferences are ultimately about the most immediate causal antecedent (1) misdescribes our preferences and (2) lacks a sound theoretical justification. You may think that Parfit’s arguments against S provide such a justification, but this isn’t so. One can accept Parfit’s criticism and reject the view that what is rational for an agent is to maximize their lifetime wellbeing, accepting instead a view on which it is rational for the agent to satisfy their present desires (which, incidentally, is not Parfit’s view). This in no way rules out the possibility that some of these present desires are aimed at future events. So the possibility that you may be clueless about which course of action satisfies those future-oriented desires remains.
Thank you for raising this; I think I was too quick here in at least implicitly suggesting that this defence would work in all cases. I definitely agree with you that we have some desires that are about the future, and that it would misdescribe our desires to conceive of all of them as being about present causal antecedents.
I think a more modest claim I might be able to defend would be something like:
The justification of everyday actions does not require an appeal to preferences with the property that, epistemically, we ought to be clueless about their content.
For example, consider the action of not wandering blindly into the road. I concede that some ways of justifying this action may involve preferences whose contents we ought to be clueless about—perhaps the preference to still be alive in 40 years is such a preference (though I don’t think this is obvious, cf. “dodge the bullet” above). However, I claim there would also be preferences, sufficient for justification, that don’t suffer from this cluelessness problem, even though they may be about the future—perhaps the preference to still be alive tomorrow, or to meet my friend tonight, or to give a lecture next week.
On the biting the bullet answer, that doesn’t seem plausible to me. The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death. Per proponents of cluelessness, I could argue “maybe it will make me look cool to smoke, and that will increase my chances of getting a desirable partner” or something like that. In that sense the sign of the effect of smoking on my own interests is not certain. Nevertheless, I think it is irrational to smoke. I don’t think a Parfitian understanding of identity would help here because then my refusal to smoke would be altruistic—I would be helping out my future self.
The dodge the bullet answer is more plausible, and I may follow up with more later.
The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death.
I think this is precisely what I’m inclined to dispute. I think I simply have a preference against premature death, and that this preference doesn’t rest on any belief about my long-run wellbeing. I think my long-run wellbeing is way too weird (in the sense that I’m doing things like hyperbolic discounting anyway) and uncertain to ground such preferences.
Nevertheless, I think it is irrational to smoke.
Maybe this points to a crux here: I think on sufficiently demanding notions of rationality, I’d agree with you that considerations analogous to cluelessness threaten the claim that smoking is irrational. My impression is that perhaps the key difference between our views is that I’m less troubled by this.
I don’t think a Parfitian understanding of identity would help here
I’m inclined to agree. Just to clarify though, I wasn’t referring to Parfit’s claims about identity, which if I remember correctly are in the second or third part of Reasons and Persons. I was referring to the first part, where he among other things discusses what he calls the “self-interest theory S” (or something like this).