I understand that you aren’t saying that altruism is completely unemotional, but I still want to emphasize the role that emotion plays. I do not distinguish too sharply between things that I want for personal reasons and things that I want for altruistic reasons. Personally, when I learned about utility functions, it was a watershed moment for my understanding of ethics.
If you describe an agent as having a utility function, it means that all of its preferences are commensurate. For example, the agent might want to have a cup of coffee and also want world peace. Importantly, the two preferences are of the same type, and I don’t distinguish between moral wants and non-moral wants.
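To make "commensurate" concrete, here is a toy sketch (the scenario and numbers are invented purely for illustration): a single utility function scores a personal want and an altruistic want on the same scale, so any two outcomes can be compared directly rather than belonging to separate moral and non-moral categories.

```python
def utility(world):
    """Toy utility function: sums the value of whatever features the world has.

    The point is that a "personal" want and an "altruistic" want live on
    one scale; they differ only in weight, not in kind.
    """
    values = {
        "has_coffee": 1.0,          # a personal want
        "world_peace": 1_000_000.0,  # an altruistic want: same scale, larger weight
    }
    return sum(v for k, v in values.items() if world.get(k))

status_quo = {"has_coffee": False, "world_peace": False}
coffee = {"has_coffee": True, "world_peace": False}
peace = {"has_coffee": False, "world_peace": True}

# Both options beat the status quo, and they are directly comparable:
assert utility(coffee) > utility(status_quo)
assert utility(peace) > utility(coffee)
```

Nothing about the function marks one preference as "moral"; the trade-off between them is just arithmetic on one scale.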
Therefore, when I say that I am altruistic, I am not saying that it is my duty to be so. If I put my biases aside and dispassionately calculate the action with the highest utility, it is because I truly believe that being dispassionate is the best way to get what I want. I would do the same for actions that concern my own life and feelings.
Splitting our motivation into two pieces, one personal and one moral, seems like a remnant of our evolutionary past. It seems to me that people naturally believe in social norms, moral standards, duty, and virtues, and that these don’t always align with what they personally want. I seek to dissolve this whole dichotomy: there is simply a world that I want to be in, and I am trying to do whatever is necessary to make that world the real one.
I’d argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you’re just one agent, but that’s certainly not a human universal.
Yeah, there are many possible ways to frame this. I like the idea of a coherent agent, but that might just be the part of me capable of putting verbal thoughts on a forum page. In any case, over time I’ve experienced a shift from viewing preferences as different types which compete, to viewing preferences as all existing together in one coherent thread. Of course, my introspection is not perfect, but this is how I feel when I look inward to find what I really want.
I do not claim that this is what other people feel. However, to the extent that I find the idea pleasing, I would certainly like it if people shared my view.
At the very least, I agree that one coherent thread is healthier and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives.
I find that counting myself and my needs into my moral framework makes it more robust.