Hey Alene!
> I’d rather relieve the suffering of the unhappy.
In case you didn’t know, this view is called prioritarianism (https://en.wikipedia.org/wiki/Prioritarianism). I’ve met other people who think this, so you are not alone. I wouldn’t be surprised if the majority of EAs think this.
> When interacting with pure utilitarians, I’ve often felt embarrassed that I used moral emotions to guide my moral decisions.
To me, the question is how they decided to be utilitarians in the first place. How did they decide whether they should be negative utilitarians, classical utilitarians, or Kantians? How did they decide that they should minimize suffering rather than, say, maximize the number of paperclips? I imagine there are various theories on this, but personally I’m convinced that emotions are at the bottom of it. There is no way to use math or anything like that to prove that suffering is bad, so emotions are the only possible source of this moral intuition that I can see. So, in my opinion, those pure utilitarians also used emotions to guide their moral decisions, just more indirectly.
Once I realized that, I started questioning: how did I decide that some moral intuitions/emotions (e.g., suffering is bad) are part of my moral compass, while other emotions (e.g., a hedonium shockwave is bad, or humans matter much more than animals) are biases that I should try to ignore? The choice of which moral emotions to trust seems totally arbitrary to me. So I don’t think there is any reason for you to feel embarrassed about using emotions. This is just my amateur reasoning, though; there are probably thick moral philosophy books that dispute this position. But then again, who has time to read those when there are so many animals we could be helping instead?
One more thought: I think that people who choose only very few moral intuitions/emotions to trust, and then follow them to their logical conclusions, are the ones who are more likely to stay on the train longer. I’m not expressing any opinion on how long we should stay on the train; as I said, I think the choice of how many moral intuitions to trust is arbitrary.
Personally, especially in the past, I also stayed on the train longer because I wanted to be different from other people; I was a contrarian. That was a bad reason.
Thank you so much, Saulius! I had never heard of prioritarianism. That is amazing! Thanks for telling me!!
I’m not the best person to speak for the pure utilitarians in my life, but yes, I think it is what you said: starting with one set of emotions (the utilitarian’s personal experience of preferring the feeling of pleasure over the feeling of suffering in their own life), and extrapolating, based on logic, to conclude that pleasure is good no matter who feels it and that suffering is bad no matter who feels it.