Thanks for writing this! I have some thoughts/questions:
I think the framing as “goals” to “achieve” suggests that the individual who holds them has to be involved in achieving them, possibly with help. Is this intended? If so, doesn’t this make all life goals at least a little self-oriented, in a sense, by requiring personal involvement and “optimizing for achievement”? On the other hand, we may prefer others to be better off even if we aren’t personally involved in ensuring it, and choose this over having them help us achieve our own life goals, which would do less good.
I also often think of deontological constraints against instrumental harm similarly: they seem like a preoccupation with keeping one’s own hands clean, rather than doing what’s best for those involved. A life goal to minimize the harm you personally cause seems more self-oriented than a life goal to minimize harm generally, and even more so than the preference for there to be less harm generally.
Similarly, “be a good person” seems both self-oriented and other-regarding, and these are the terms virtue ethicists think in.
This leads to my next point:
Self-oriented life goals describe what’s good for an individual. If everyone’s life goals were self-oriented, fulfilling life goals would be equivalent to helping people flourish in the ways they prefer for themselves. However, because people can have other-regarding life goals (e.g., consider effective altruism as someone’s sole life goal), we can’t interpret “fulfilling someone’s preferences (or life goals)” as “helping that person flourish.” Therefore, I’d find it slightly strained to interpret “preference utilitarianism” as “altruism/doing good impartially.”
I don’t think it’s that weird to consider that helping someone achieve their life goal to do good (e.g. effective altruism) does in fact help them flourish. Maybe this is more strongly the case if their life goal is to “be a good person” rather than “do good”.
On the other hand, I agree it’s a little weird to say that you’ve helped someone by further satisfying their preferences for others to be better off, if that person doesn’t even know about it or otherwise was not involved. And helping them achieve their other-regarding life goals rather than just doing more good can be worse in their eyes.
I think the framing as “goals” to “achieve” suggests that the individual who holds them has to be involved in achieving them, possibly with help. Is this intended?
I think you’re saying that the word “achieve” has the connotation of actively doing something (and “earning credit for it”)? That’s not the meaning I intended. There are conceivable circumstances where “achieving your life goals” (for specific life goals) implies getting out of the way so others can do something better. (I’m reminded of the recent post here titled I want to be replaced.)
Similarly, “be a good person” seems both self-oriented and other-regarding, and these are the terms virtue ethicists think in.
I agree!
I don’t think it’s that weird to consider that helping someone achieve their life goal to do good (e.g. effective altruism) does in fact help them flourish. Maybe this is more strongly the case if their life goal is to “be a good person” rather than “do good”.
There could be a situation where the best way to advance Alice’s life goal is to do something that leads to Alice becoming depressed. E.g., if Alice thinks she’s the best person for some role whose mission is in line with her life goal, but you’re confident she’s not, you’d vote against her. I think there’s still a sense in which we can defensibly interpret this as “doing something (ultimately) good for Alice,” because there’s something to living with one’s eyes open and not deluding oneself, etc. But my point is that this isn’t necessarily the most natural interpretation, or the only natural one.