Opportunity Cost Ethics
“Every man is guilty of all the good he did not do.”
~Voltaire
Opportunity Cost Ethics is a term I invented to capture the ethical view that failing to do good ought to carry the same moral weight as doing harm.
You could say that in Opportunity Cost Ethics, sins of omission are equivalent to sins of commission.
In this view, if you walk by a child drowning in a pond, and there is zero cost to you to saving the child, it would be equally morally inexcusable for you not to save the child from drowning as it would be to take a non-drowning child and drown them in the pond.
Moreover, if you have good reason to suspect there are massive opportunities to do good, such as a reasonable probability that you could save millions of lives, Opportunity Cost Ethics states that the moral thing to do is to seek out such opportunities and find the best ones available, after accounting for search costs.
Opportunity Cost Ethics is a consequentialist ethical theory, as it states the only morally relevant fact is the ultimate effect of our actions or non-actions.
The strongest version of Opportunity Cost Ethics, which I hold, states that:
“The best moral action at any given time requires considering as many actions as possible, until marginal search costs exceed the expected value of finding better options; at which point an optimal stopping point has been reached. Considerations of proximity also need to be taken into account.”
We could call the above “Strong Opportunity Cost Ethics.”
Possibly the most important implication of Opportunity Cost Ethics is that we should spend a massive amount of time, energy, and resources trying to determine what the best possible actions we can take are.
It is very difficult to know how much time we should spend trying to figure out how good various actions are, and how certain we should be before we act. Opportunity Cost Ethics states that it is morally right to continue searching for the best action you can take until you predict that the marginal cost of searching for a better action outweighs the amount of additional good that the better action would achieve.
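For concreteness, here is a minimal sketch in Python of that stopping rule. The functions `estimate_value` and `expected_gain_from_more_search` are hypothetical placeholders for whatever best-guess estimates one actually uses; the sketch only illustrates the structure of the search, not a definitive decision procedure.

```python
def choose_action(candidate_actions, marginal_search_cost,
                  estimate_value, expected_gain_from_more_search):
    """Evaluate options until the predicted gain from further search
    no longer exceeds its marginal cost (an optimal stopping point)."""
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        value = estimate_value(action)  # best-guess expected good of this action
        if value > best_value:
            best_action, best_value = action, value
        # Stop once the predicted improvement from evaluating another option
        # no longer exceeds the cost of doing so.
        if expected_gain_from_more_search(best_value) <= marginal_search_cost:
            break
    return best_action
```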
Due to its obvious importance, I assume something like this already exists; if so, please point me to it, thanks!
edit: Forum user tc pointed out that the “doing/allowing” distinction in ethics (whether causing harm is morally equivalent to failing to prevent it), which inspired trolley dilemmas, is highly relevant.
This is the kind of thing I was looking for! I am, however, uncertain if it fully captures what I am saying, which is more about the potential to do a massive amount of good, and the ethical responsibility to seek out such opportunities if we suspect they are available and the search costs do not cancel out the good.
At some point I hope to extend this short-form into a more fully formed, robust theory.
Hi, Jordan.
I thought over opportunity cost ethics in the context of longtermism, and formed the questions:
how do I decide what I am obliged to do when I am considering ethical vs selfish interests of mine?
if I can be ethically but not personally obliged, then what makes an ethical action obligatory?
am I morally accountable for the absence of consequences of actions that I could have taken?
who decides what the consequences of my absent actions would have been?
how do I compare the altruism of consequences of one choice of action versus another?
do intentions carry ethical weight, that is, when I intend well, do harmful consequences matter?
what do choices that emphasize either selfish or ethical interest mean about my character?
I came up with examples and thought experiments to satisfy my own questions, but since you’re taking such a radically different direction, I recommend the same questions to you, and wonder what your answers will be.
I will offer that shaming or rewarding behavior in terms of what it means about supposed traits of character (for example, selfishness, kind-heartedness, cruelty) has impact in society, even in our post-modern era of subjectivity-assuming and meta-perspective seeking. We don’t like to be thought of in terms of unfavorable character traits. Beyond personality, if character traits show through that others admire or trust (or dislike or distrust), that makes a big difference to one’s perceived sense of one’s own ethics, regardless of how fair, rational, or applicable the ethics actually are.
As an exercise in meta-cognition, I can see that my own proposals for ethical systems that I comfortably employ will plausibly lack value to others. I take a safe route, equating altruism with the service of others’ interests, and selfishness with the service of my own. Conceptually consistent, lacking in discussion of character traits, avoiding discussion of ethical obligations of any sort.
While I enjoy the mental clarity that confidence in one’s own beliefs provides, I fear a strong mismatch of my beliefs with the world or with the rest of my beliefs. It’s hard to stay clearheaded while also being internally consistent with regard to goals, values, etc. As a practical matter, getting much better at practicing ethics than:
distinguishing selfish from altruistic interests
determining the consequences of actions
seems too difficult for me.
My actual decisions don’t necessarily come from a calculus of ethical weights alongside a parallel set of personal weights. Their actual functioning is suspect. I doubt their utility to me, frankly.
Accordingly, I ignore the concept of ethical obligation and consign pursuit of positive character traits to the realm of personal beliefs. I even go so far as to treat ideas of alternative futures as mere beliefs. By doing so, I reconcile the subjective world of beliefs with a real world of nondeterministic outcomes, on the cheap. There’s just one pathway through time that we know anything about, and it’s the one we take. Everything else is merely believed to have been an alternative. But in a world of actual obligations and intrinsically valuable character traits, everything looks different. I mostly ignore that world when discussing ethics. I suspect that you don’t.
Please let me know your thoughts on these topics, if you like. Thanks!
Wow Noah! I think this is the longest comment I’ve had on any post, despite it being my shortest post haha!
First of all, some context. The reason I wrote this shortform was actually just so I could link to it in a post I’m finishing which estimates how many lives longtermists save per minute. Here is the current version of the section in which I link to it; I think it may answer some of your questions:
“The Proximity Principle
The take-away from this post is not that you should agonize over the trillions of trillions of trillions of men, women, and children you are thoughtlessly murdering each time you splurge on a Starbucks pumpkin spice latte or watch cat videos on TikTok — or in any way whatsoever commit the ethical sin of making non-optimal use of your time. [[[this is where I link to the “Opportunity Cost Ethics” shortform]]]
The point of this post is not to create a longtermist “dead children currency” analogue. Instead it is meant to be motivating background information, giving us all the more good reason to be thoughtful about our self-care and productivity.
I call the principle of strategically caring for yourself and those closest to you “The Proximity Principle,” something I discovered after several failed attempts to be perfectly purely altruistic. It roughly states that:
It is easiest to affect those closest to you (in time, space, and relatedness)
Taking care of yourself and those closest to you is high leverage for multiplying your own effectiveness in the future
To account for proximity, perhaps in addition to conversion rates for time and money into lives saved, we also need conversion rates for time and money into increases in personal productivity, personal health & wellbeing, mental health, self-development, personal relationships, and EA community culture.
These things may be hard to quantify, but probably less hard than we think, and seem like fruitful research directions for social-science oriented EAs. I think these areas are highly valuable relative to time and money, even if only valued instrumentally.
In general, for those who feel compelled to over-work to a point that feels unhealthy, have had a tendency to burn out in the past, or think this may be a problem for them, I would suggest erring on the side of over-compensating.
This means finding self-care activities that make you feel happy, energized, refreshed, and a sense of existential hope — and, furthermore, doing these activities regularly, more than the minimum you feel you need to in order to work optimally.
I like to think of this as keeping my tank nearly full, rather than perpetually halfway full or nearly empty. From a systems theory perspective, you are creating a continuous inflow and keeping your energy stocks high, rather than waiting until they are fully depleted and panic mode alerts you to refill.
For me, daily meditation, daily exercise, healthy diet, and good sleep habits are most essential. But each person is different, so find what works for you.
Remember, if you want to change the future, you need to be at your best. You are your most valuable asset. Invest in yourself.”
I will try to answer each question as I understand it; let me know if this makes sense.
how do I decide what I am obliged to do when I am considering ethical vs selfish interests of mine? ANSWER: Opportunity Cost Ethics itself states what is ethical but NOT what is morally obligatory. In the strong version, you are always obliged to do what is altruistic. But this means taking care of yourself “selfishly” in-so-far as it is helpful in making you more productive. I call this taking “proximity” into account.
if I can be ethically but not personally obliged, then what makes an ethical action obligatory? ANSWER: I do not understand your distinction between personally obliged and ethically obliged, could you clarify? I take a moral realism stance in which there is only one correct ethical system, whether or not we know what it is. If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this.
am I morally accountable for the absence of consequences of actions that I could have taken? ANSWER: YES! This is exactly the point. If you could have saved a child from drowning but don’t (and there are no additional costs to the action), that is precisely equivalent to murdering that child, with all the consequences that follow.
who decides what the consequences of my absent actions would have been? ANSWER: this is something that must be estimated with expected value and best-guesses. As longtermists, we know that our actions could have extremely positive consequences. In the piece I quoted above, I calculated that the average longtermist can conservatively expect to save about a trillion trillion trillion lives per minute of work or dollar donated.
how do I compare the altruism of consequences of one choice of action versus another? ANSWER: again best guesses, using expected value. Possibly the most important implication of Opportunity Cost Ethics is that we should spend a massive amount of time, energy, and resources trying to determine what the best possible actions we can take are.
do intentions carry ethical weight, that is, when I intend well, do harmful consequences matter? ANSWER: this is related to the search costs and optimal stopping I mentioned in the short-form. It is very difficult to know how much time you should spend trying to figure out how good various actions are and whether they may be harmful, and how certain you should be before you proceed. In this framework, it is morally right to continue searching for the best action you can take until you predict that the marginal cost of searching for a better action outweighs the amount of additional good that the better action would achieve. If negative consequences follow despite you having done your best to follow this process, then the action was morally wrong because, according to consequentialism, the consequences were bad; yet your process and the intentions that led to the action may still have been as morally good as they could have been.
what do choices that emphasize either selfish or ethical interest mean about my character? ANSWER: Since I am using an over-arching consequentialist framework rather than character ethics, I am not sure I have an answer to this. My guess would be that good character would be correlated with altruism, after accounting for proximity.
On your other comments, I do not tend to think in terms of character traits much, except that I may seek to develop good character traits in myself and support them in others, in order to achieve good consequences. To me, good character traits are simply a short-hand which implies the person has habits or heuristics of action that tend to lead to good consequences within their context.
I also don’t tend to think in terms of obligations. I think obligations may be a useful abstraction for some, in that they help encourage people to take actions with good consequences. Perhaps in a sense, I see moral obligation as a statement that actions which achieve the most good possible are the best actions to take, and that we “should” always take the best action we can; “should” in this context means it would be most good for us to do so.
So it is all basically one big self-referential tautology. You can choose to be moral or not: morality is good because real people’s lives, happiness, suffering, etc. are at stake, but you are free to choose whether or not to do what is good. I choose to do good, in-so-far as I am able and encourage others to do the same as the massively positive-sum world that results is best for all, including myself.
I also think enlightened self-interest, by which I mean that helping others and living a life of purpose makes me much happier than I would otherwise be, plays an important role in my worldview. So does open individualism, the view that all consciousness shares the same identity, in which even an extremely, extremely small credence implies that longtermist work is likely the highest expected value selfish action. When you add The Proximity Principle, for me, selfishness and altruism are largely convergent and capable of integration — not to say there aren’t sometimes extremely difficult trade-offs.
Thanks for all your questions! This brought out a lot of good points I would like to add to a more thorough piece later. Let me know if what I said made sense, and I’d be very curious to hear your impressions of what I said.
Well, your formulation, as stated, would lead me to multiple conceptual difficulties, but the most practical one for me is how to conceptualize altruism. How do you know when you are being altruistic?
When you throw in the concepts of “enlightened self-interest” and “open individualism” to justify longtermism, it appears as though you have partial credence in fundamental beliefs that support your choice of ethical system. But you claim that there is only one correct ethical system. Would you clarify for me?
You wrote:
“On your other comments, I do not tend to think in terms of character traits much except that I may seek to develop good character traits in myself and support them in others, in order to achieve good consequences.”
“If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this.”
“I choose to do good, in-so-far as I am able and encourage others to do the same as the massively positive-sum world that results is best for all, including myself. ”
From how you write, you seem like a kind, well-meaning, and thoughtful person. Your efforts to develop good character traits seem to be paying off for you.
You wrote:
“If I can be ethically but not personally obliged, then what makes an ethical action obligatory? ANSWER: I do not understand your distinction between personally obliged and ethically obliged, could you clarify? I take a moral realism stance in which there is only one correct ethical system, whether or not we know what it is. If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this.”
To clarify, if I apply your proximity principle, or enlightened self-interest, or your recommendations for self-care, but simultaneously hold myself ethically accountable for what I do not do (as your ethic recommends), then it appears as though I am not personally obliged in situations where I am ethically obliged.
If you hold yourself ethically accountable but not personally accountable, then you have ethical obligations but not personal obligations, and your ethical system becomes an accounting system, rather than a set of rules to follow. Your actions are weighted separately in terms of their altruistic and personal consequences, with different weights (or scores) for each, and you make decisions however you do. At some point(s) in time, you check the balance sheet of your altruistic and personal consequences and use whatever you believe it shows to decide whether you are in fact a moral person.
I think it’s a mistake to discuss your selfish interests as being in service to your altruistic ones. It’s a factual error and logically incoherent besides. You have actual selfish interests that you serve that are not in service to your ethics. Furthermore, selfish interests are in fact orthogonal to altruistic interests. You can serve either or both or neither through the consequences of your actions.
Interesting points.
Yes, as I said, for me altruism and selfishness have some convergence. I try to always act altruistically, and enlightened self-interest and open individualism are tools (which I actually do think have some truth to them) that help me tame the selfish part of myself that would otherwise demand much more. They may also be useful in persuading people to be more altruistic.
While I think there is likely only one correct ethical system, I think it is most likely consequentialist, and therefore these conceptual tools are useful for helping me and others to, in practical terms, actually achieve those ethical goals.
I suppose I see it as somewhat of an inner psychological battle: I try to be as altruistic as possible, but I am a weak and imperfect human who is not able to be perfectly altruistic, and I often end up acting selfishly.
In addition to this, if I fail to account for proximity I actually become less effective: not sufficiently meeting my own needs makes me less effective in the future, so some degree of what on the surface appears selfish is actually the best thing I can do altruistically.
You say:
“To clarify, if I apply your proximity principle, or enlightened self-interest, or your recommendations for self-care, but simultaneously hold myself ethically accountable for what I do not do (as your ethic recommends), then it appears as though I am not personally obliged in situations where I am ethically obliged.”
In such a situation the ethical thing to do is whatever achieves the most good. If taking care of yourself right now means that in the future you will be 10% more efficient, and it only takes up 5% of your time or other resources, then the best thing is to help yourself now so that you can better help others in the future.
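Using those illustrative numbers, the arithmetic works out as follows: spending 5% of your time on self-care leaves 95% for direct work, and a 10% efficiency gain on that remainder gives 0.95 × 1.10 ≈ 1.045, roughly 4.5% more total output than working every hour at the lower efficiency.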
Sorry if I wasn’t clear! I don’t understand what you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely define the term and how it differs from ethically obliged? As I said, I don’t really think in terms of obligations, and so maybe this is why I don’t understand it.
I would say ethics could be seen as an accounting system or a set of guidelines of how to live. Maybe you could say ex ante ethics are guidelines, and ex post they are an accounting system.
When I am psychologically able, I will hopefully use ethics as guidelines. If the accounts show that I or others are consistently failing to do good, then that is an indication that part of the ethical system (or something else about how we do good) is broken and in need of repair, so this accounting is useful for the practical project of ethical behavior.
Your last paragraph:
“I think it’s a mistake to discuss your selfish interests as being in service to your altruistic ones. It’s a factual error and logically incoherent besides. You have actual selfish interests that you serve that are not in service to your ethics. Furthermore, selfish interests are in fact orthogonal to altruistic interests. You can serve either or both or neither through the consequences of your actions.”
Hm, I’m not sure this is accurate. I read a book that mentioned studies showing that happiness and personal effectiveness seem to be correlated. I can’t see how not meeting your basic needs allows you to altruistically do more good, or why this wouldn’t extend to optimizing your productivity, which likely includes having relatively high levels of personal physical, mental, and emotional health. No doubt, you shouldn’t spend 100% of your resources maximizing these things, but I think effectiveness requires a relatively high level of personal well-being. This seems empirical and testable: either high levels of well-being cause greater levels of altruistic success or they don’t. You could believe all of this in a purely altruistic framing, without ever introducing selfishness — indeed this is why I use the term proximity, to distinguish it from selfish selfishness. You could say proximity is altruistically strategic selfishness. But I don’t really think the terminology is as important as the empirical claim that taking care of yourself helps you help others more effectively.
You wrote:
“Sorry if I wasn’t clear! I don’t understand what you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely define the term and how it differs from ethically obliged? As I said, I don’t really think in terms of obligations, and so maybe this is why I don’t understand it.”
OK, a literal interpretation could work for you. So, while your ethics might oblige you to an action X, you yourself are not personally obliged to perform action X. Why are you not personally obliged? Because of how you consider your ethics. Your ethics are subject to limitations due to self-care, enlightened self-interest, or the proximity principle. You also use them as guidelines, is that right? Your ethics, as you describe them, are not a literal description of how you live or a do-or-die set of rules. Instead, they’re more like a perspective, maybe a valuable one incorporating information about how to get along in the world, or how to treat people better, but only a description of what actions you can take in terms of their consequences. You then go on to choose actions however you do and can evaluate your actions from your ethical perspective at any time. I understand that you do not directly say this but it is what I conclude based on what you have written. Your ethics as rules for action appear to me to be aspirational.
I wouldn’t choose consequentialism as an aspirational ethic. I have not shared my ethical rules or heuristics on this forum for a reason. They are somewhat opaque to me. That said, I do follow a lot of personal rules, simple ones, and they align with what you would typically expect from a good person in my current circumstances. But am I a consequentialist? No, but a consequentialist perspective is informative about consequences of my actions, and those concern me in general, whatever my goals.
In a submission to the Red Team Contest a few months back, I wrote up my thoughts on beliefs and altruistic decision-making.
I also wrote up some quick thoughts about longtermism in “longtermists should self-efface.”
I’ve seen several good posts here about longtermism, and one that caught my eye is “A Case Against Strong Longtermism.”
In case you’re wondering, I am not a strong longtermist.
Thanks for the discussion, let me know your feedback and comments on the links I shared if you like.