Interesting points.
Yes, as I said, for me altruism and selfishness have some convergence. I try to always act altruistically, and enlightened self-interest and open individualism are tools (which I actually do think have some truth to them) that help me tame the selfish part of myself that would otherwise demand much more. They may also be useful in persuading people to be more altruistic.
I think there is likely only one correct ethical system, and that it is most likely consequentialist; these conceptual tools are therefore useful for helping me and others actually achieve those ethical goals in practice.
I suppose I see it as something of an inner psychological battle: I try to be as altruistic as possible, but I am a weak and imperfect human who cannot be perfectly altruistic, and I often end up acting selfishly.
In addition, if I fail to account for proximity I actually become less effective, because not sufficiently meeting my own needs makes me less effective in the future; hence some of what appears selfish on the surface is actually the best thing I can do altruistically.
You say:
“To clarify, if I apply your proximity principle, or enlightened self-interest, or your recommendations for self-care, but simultaneously hold myself ethically accountable for what I do not do (as your ethic recommends), then it appears as though I am not personally obliged in situations where I am ethically obliged.”
In such a situation the ethical thing to do is whatever achieves the most good. If taking care of yourself right now means that in the future you will be 10% more efficient, and it takes up only 5% of your time or other resources, then the best thing is to help yourself now so that you can better help others in the future.
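The tradeoff above can be sketched as a toy calculation (the 10% and 5% figures are just the illustrative numbers from the text, not empirical estimates):

```python
# Toy model of the self-care tradeoff: spend 5% of your resources on
# yourself now in exchange for being 10% more efficient afterwards.
remaining_resources = 1.0 - 0.05    # 95% of resources left for altruism
efficiency_multiplier = 1.0 + 0.10  # 10% efficiency gain from self-care
good_with_selfcare = remaining_resources * efficiency_multiplier
good_without_selfcare = 1.0         # baseline: all resources, no gain
print(round(good_with_selfcare, 3))                # 1.045
print(good_with_selfcare > good_without_selfcare)  # True
```

On these numbers the self-care option does about 4.5% more total good, which is the sense in which surface-level selfishness can be the altruistically optimal choice.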
Sorry if I wasn’t clear! I don’t understand what you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely define the term and how it differs from “ethically obliged”? As I said, I don’t really think in terms of obligations, and so maybe that is why I don’t understand it.
I would say ethics could be seen as an accounting system or a set of guidelines of how to live. Maybe you could say ex ante ethics are guidelines, and ex post they are an accounting system.
When I am psychologically able, I will hopefully use ethics as guidelines. If the accounts show that I or others are consistently failing to do good, then that is an indication that part of the ethical system (or something else about how we do good) is broken and in need of repair, so this accounting is useful for the practical project of ethical behavior.
Your last paragraph:
“I think it’s a mistake to discuss your selfish interests as being in service to your altruistic ones. It’s a factual error and logically incoherent besides. You have actual selfish interests that you serve that are not in service to your ethics. Furthermore, selfish interests are in fact orthogonal to altruistic interests. You can serve either or both or neither through the consequences of your actions.”
Hm, I’m not sure this is accurate. I read a book that mentioned studies showing that happiness and personal effectiveness seem to be correlated. I can’t see how not meeting your basic needs allows you to altruistically do more good, or why this wouldn’t extend to optimizing your productivity, which likely includes having relatively high levels of personal physical, mental, and emotional health. No doubt you shouldn’t spend 100% of your resources maximizing these things, but I think effectiveness requires a relatively high level of personal well-being. This seems empirical and testable: either high levels of well-being cause greater levels of altruistic success or they don’t. You could believe all of this in a purely altruistic framing, without ever introducing selfishness; indeed, this is why I use the term proximity, to distinguish it from selfishness proper. You could say proximity is altruistically strategic selfishness. But I don’t really think the terminology is as important as the empirical claim that taking care of yourself helps you help others more effectively.
You wrote:
“Sorry if I wasn’t clear! I don’t understand what you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely define the term and how it differs from “ethically obliged”? As I said, I don’t really think in terms of obligations, and so maybe that is why I don’t understand it.”
OK, a literal interpretation could work for you. So, while your ethics might oblige you to an action X, you yourself are not personally obliged to perform action X. Why not? Because of how you hold your ethics. Your ethics are subject to limitations due to self-care, enlightened self-interest, or the proximity principle. You also use them as guidelines, is that right? Your ethics, as you describe them, are not a literal description of how you live or a do-or-die set of rules. Instead, they are more like a perspective: perhaps a valuable one, incorporating information about how to get along in the world or how to treat people better, but ultimately only a description of possible actions in terms of their consequences. You then choose your actions however you actually do, and you can evaluate them from your ethical perspective at any time. I understand that you do not say this directly, but it is what I conclude from what you have written. Your ethics, as rules for action, appear to me to be aspirational.
I wouldn’t choose consequentialism as an aspirational ethic. I have not shared my ethical rules or heuristics on this forum for a reason: they are somewhat opaque to me. That said, I do follow a lot of personal rules, simple ones, and they align with what you would typically expect from a good person in my current circumstances. Am I a consequentialist? No, but a consequentialist perspective is informative about the consequences of my actions, and those concern me in general, whatever my goals.
In a submission to the Red Team Contest a few months back, I wrote up my thoughts on beliefs and altruistic decision-making.
I also wrote up some quick thoughts about longtermism in “longtermists should self-efface”.
I’ve seen several good posts here about longtermism; one that caught my eye is “A Case Against Strong Longtermism”.
In case you’re wondering, I am not a strong longtermist.
Thanks for the discussion; let me know your feedback and comments on the links I shared if you like.