The analogy resonated with me too. It reminded me of a part of my journey where I went to what was, to me, crazy town and came back. I’d like to share my story, partly to illustrate the concept. And if others shared their stories too, I think that could be valuable, or at least interesting.
At one point I decided that by far the best possible outcome for the future would be the so-called hedonium shockwave. The way I imagined it at the time, it would be an AI filling the universe as fast as possible with a homogeneous substance that experiences extreme bliss, e.g., nanochips that simulate sentient minds in a constant state of extreme bliss. Those minds might be only just sentient enough to experience bliss, to save computation power for more minds. And since this seemed so overwhelmingly important, I thought that the goal of my life should be to increase the probability of a hedonium shockwave.
But then I procrastinated on doing anything about it. When I thought about why, I realized that the prospect of a hedonium shockwave didn’t excite me. In fact, the scenario seemed sad and worrying. After more contemplation, I think I figured out why. I viewed myself as an almost pure utilitarian (except for some selfishness). The hedonium shockwave seemed like the correct conclusion from the utilitarian point of view, hence I concluded that it was what I wanted. But while utilitarianism might do a fine job of approximating my values in most situations, it did a bad job in this edge case. Utilitarianism was a map, not the territory. So nowadays I still try to figure out what utilitarianism would suggest doing, but then I try to remember to ask myself: is this what I really want (or really think)? My model of myself might be different from the real me. In my diary at the time, I made this drawing to illustrate it. It’s superfluous to the text, but drawings help me remember things.
Oh wow, Saulius, it is so exciting to read this! You described exactly how I think, too. I, too, only follow utilitarianism as a way of making moral decisions when it comports with what my moral emotions tell me to do. And the reason I love utilitarianism is just that it matches my moral emotions about 90% of the time. The main place I get off the utilitarian train is the utilitarian idea that it is morally just as good to give one additional unit of joy to a being who is already happy as it is to relieve an unhappy being of one unit of suffering. I’d rather relieve the suffering of the unhappy. So I relate to your not following the conclusion that utilitarianism led you to when it felt wrong to you emotionally. (That said, I actually love the idea of lots of blissed-out minds filling the universe, so I guess our moral emotions tell us different things.)
When interacting with pure utilitarians, I’ve often felt embarrassed that I used moral emotions to guide my moral decisions. Thanks for making me feel more comfortable coming “out” about this emotional-semi-utilitarian way of thinking, Saulius!
Also, I love that you acknowledged that selfishness, of course, also influences our decision making. It does for me, too. And I think declaring that fact is the most responsible thing for us to do, for multiple reasons. It is more honest, and it helps others realize they can do good while still being human.
Hey Alene!
> I’d rather relieve the suffering of the unhappy.
In case you didn’t know, it’s called prioritarianism (https://en.wikipedia.org/wiki/Prioritarianism). I’ve met other people who think this, so you are not alone. I wouldn’t be surprised if the majority of EAs think this way.
> When interacting with pure utilitarians, I’ve often felt embarrassed that I used moral emotions to guide my moral decisions.
To me, the question is: how did they decide to be utilitarians in the first place? How did they decide whether they should be negative utilitarians, classical utilitarians, or Kantians? How did they decide that they should minimize suffering rather than, say, maximize the number of paperclips? I imagine there are various theories on this, but personally, I’m convinced that emotions are at the bottom of it. There is no way to use math or anything like that to prove that suffering is bad, so emotions are the only possible source of this moral intuition that I can see. So in my opinion, those pure utilitarians also used emotions to guide their moral decisions, just more indirectly.
Once I realized that, I started questioning: how did I decide that some moral intuitions/emotions (e.g., suffering is bad) are part of my moral compass, while other emotions (e.g., a hedonium shockwave is bad, humans matter much more than animals) are biases that I should try to ignore? The choice of which moral emotions to trust seems totally arbitrary to me. So I don’t think there is any reason why you should feel embarrassed about using emotions. This is just my amateur reasoning, though; there are probably thick moral philosophy books that disprove this position. But then again, who has time to read those when there are so many animals we could be helping instead?
One more thought: I think that people who choose only a very few moral intuitions/emotions to trust and then follow them to their logical conclusions are the ones who are more likely to stay on the train longer. I’m not expressing any opinion on how long we should stay on the train; as I said, I think the choice of how many moral intuitions to trust is arbitrary.
Personally, especially in the past, I also stayed on the train longer because I wanted to be different from other people, because I was a contrarian. That was a bad reason.
Thank you so much, Saulius! I had never heard of prioritarianism. That is amazing! Thanks for telling me!!
I’m not the best one to speak for the pure utilitarians in my life, but yes, I think it was what you said: starting with one set of emotions (the utilitarian’s personal experience of preferring the feeling of pleasure over the feeling of suffering in his own life), and extrapolating based on logic to conclude that pleasure is good no matter who feels it and that suffering is bad no matter who feels it.
Thinking about it more now, I’m still unsure whether this is the right way to think about things. As a moral relativist, I don’t think there is moral truth, although I’m unsure about that because others disagree. But I seem to have concluded that I should just follow my emotions in the end, and only care about moral arguments if they convince my emotional side. Like almost all moral claims, that is contentious. It also makes me think that perhaps how far towards crazy town you go can depend a lot on how much you trust your emotions vs. argumentation.
I had a similar journey. I still think that “utilitarian” is a good description of me, since utilitarianism seems basically right to me in all non-sci-fi scenarios, but I don’t have any confidence in the extreme edge cases.