Oh wow, Saulius, it is so exciting to read this! You described exactly how I think, also. I, too, only follow utilitarianism as a way of making moral decisions when it comports with what my moral emotions tell me to do. And the reason I love utilitarianism is just that it matches my moral emotions about 90% of the time. The main time I get off the utilitarian train is when I consider the utilitarian idea that it should be morally just as good to give one additional unit of joy to a being who is already happy as it is to relieve an unhappy being of one unit of suffering. I’d rather relieve the suffering of the unhappy. So I relate to you not following the ideas utilitarianism led you to when they felt wrong to you emotionally. (That said, I actually love the idea of lots of blissed-out minds filling the universe, so I guess our moral emotions tell us different things.)
When interacting with pure utilitarians, I’ve often felt embarrassed that I used moral emotions to guide my moral decisions. Thanks for making me feel more comfortable coming “out” about this emotional-semi-utilitarian way of thinking, Saulius!
Also, I love that you acknowledged that selfishness, of course, also influences our decision making. It does for me, too. And I think declaring that fact is the most responsible thing for us to do, for multiple reasons. It is more honest, and it helps others realize they can do good while still being human.
Hey Alene!

> I’d rather relieve the suffering of the unhappy.
In case you didn’t know, this is called prioritarianism (https://en.wikipedia.org/wiki/Prioritarianism). I’ve met multiple people who think this way, so you are not alone. I wouldn’t be surprised if the majority of EAs think this.
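(If a toy formalization helps: prioritarianism is often modeled as summing a concave transform of each being’s welfare, so that a unit of welfare counts for more when it goes to someone worse off. The square root below is just an illustrative choice of concave function, not anything specific from the article, and the welfare numbers are made up.)

```python
import math

def prioritarian_value(welfare_levels):
    # Sum a concave transform of each being's welfare, so one unit of
    # welfare counts for more when it goes to someone worse off.
    # sqrt is an arbitrary illustrative concave function.
    return sum(math.sqrt(w) for w in welfare_levels)

# One extra unit of joy for an already-happy being (welfare 100 -> 101)...
gain_to_happy = prioritarian_value([101]) - prioritarian_value([100])

# ...vs. relieving one unit of suffering for an unhappy being (1 -> 2).
gain_to_unhappy = prioritarian_value([2]) - prioritarian_value([1])

# A classical utilitarian counts both gains equally;
# a prioritarian counts the gain to the unhappy being for more.
assert gain_to_unhappy > gain_to_happy
```

A classical utilitarian corresponds to the special case where the transform is the identity function, so both gains come out equal.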
> When interacting with pure utilitarians, I’ve often felt embarrassed that I used moral emotions to guide my moral decisions.
To me, the question is how did they decide to be utilitarian in the first place? How did they decide whether they should be negative utilitarians, or classical utilitarians, or Kantians? How did they decide that they should minimize suffering rather than say maximize the number of paperclips? I imagine there are various theories on this but personally, I’m convinced that emotions are at the bottom of it. There is no way to use math or anything like that to prove that suffering is bad, so emotions are the only possible source of this moral intuition that I can see. So in my opinion, those pure utilitarians also used emotions to guide moral decisions, just more indirectly.
Once I realized that, I started questioning: how did I decide that some moral intuitions/emotions (e.g., suffering is bad) are part of my moral compass, while other emotions (e.g., a hedonium shockwave is bad, humans matter much more than animals) are biases that I should try to ignore? The choice of which moral emotions to trust seems totally arbitrary to me. So I don’t think there is any reason you should feel embarrassed about using emotions. This is just my amateur reasoning though; there are probably thick moral philosophy books that disprove this position. But then again, who has time to read those when there are so many animals we could be helping instead.
One more thought: I think that people who choose only a very few moral intuitions/emotions to trust, and then follow them to their logical conclusions, are the ones more likely to stay on the train longer. This is not expressing any opinion on how long we should stay on the train. As I said, I think the choice of how many moral intuitions to trust is arbitrary.
Personally, especially in the past, I also stayed on the train longer because I wanted to be different from other people, because I was a contrarian. That was a bad reason.
Thank you so much, Saulius! I’d never heard of prioritarianism. That is amazing! Thanks for telling me!!
I’m not the best one to speak for the pure utilitarians in my life, but yes, I think it was what you said: starting with one set of emotions (the utilitarian’s personal experience of preferring the feeling of pleasure over the feeling of suffering in his own life), and extrapolating based on logic to conclude that pleasure is good no matter who feels it and that suffering is bad no matter who feels it.