Flipping the Repugnant Conclusion
Imagine a world populated by many, many (trillions of) people. These people’s lives aren’t purely full of joy; they contain a lot of misery as well. But each person thinks that their life is worth living. Their lives might be a bit boring or they might be full of huge ups and downs, but on the whole they are net-positive.
From this perspective, it seems really strange to think it would be good for every person in this world to die, not exist, or never have existed, in order to allow a very small number of privileged people to live spectacular lives. It seems bad to stop many people from living lives they mostly enjoy in order to allow the flourishing of the few.
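To make the trade-off concrete under a simple total-welfare reading (the numbers here are purely illustrative assumptions, not anything from the original argument):

\[
\underbrace{10^{12}\ \text{people} \times 1\ \text{welfare unit each}}_{\text{the many, barely net-positive}} \;=\; 10^{12}
\;>\;
10^{10} \;=\; \underbrace{10^{6}\ \text{people} \times 10^{4}\ \text{units each}}_{\text{the few, spectacular}}
\]

On this accounting, replacing the many with the few destroys far more total welfare than it creates, which is why the flipped framing makes the trade look bad.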
I think this hypothetical is a decent intuition pump for why the Repugnant Conclusion isn’t actually repugnant. But I do think it might be a little bit dishonest or manipulative. It frames the situation in terms of fairness and equality: we can sympathize with the many slightly happy people who are maybe being denied the right to exist, and think of the few extremely happy people as the privileged elite. It also takes advantage of status quo bias: because we begin with the many slightly happy people already existing, it seems worse to then ‘remove’ them.
I’ve always thought the Repugnant Conclusion was mostly status quo bias, anyway, combined with the difficulty of imagining what such a future would actually be like.
I think the Utility Monster is a similar issue. Maybe it would be possible to create something whose experiences are much richer than any human’s and should be valued more highly. But any such being would actually be pretty awesome, so we shouldn’t resent giving it a greater share of resources.
Humans seem like (plausible) utility monsters compared to ants, and many religious people have a conception of God that would make Him a utility monster (“maybe you don’t like prayer and following all these rules, but you can’t even conceive of how much grander it is to God when we follow these rules than even the best experiences of our whole lives; ‘joy’ doesn’t even do it justice!”). Anti-utility-monster sentiment seems to come largely from someone imagining a human who is pretty happy by human standards, attaching the words “orders of magnitude happier than what any human feels”, and never noticing that their intuition doesn’t actually track the words “orders of magnitude”.
I like this perspective. I’ve never really understood why people find the Repugnant Conclusion repugnant!
Updating Moral Beliefs
Imagine there is a box with a ball inside it, and you believe the ball is red. But you also believe that in the future you will update your belief and think that the ball is blue (the ball is a normal, non-color-changing ball). This seems like a very strange position to be in, and you should just believe that the ball is blue now.
This is an example of how we should deal with beliefs in general; if you think in the future you will update a belief in a specific direction then you should just update now.
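A minimal way to see why this is forced on you, assuming your beliefs behave like probabilities and you update by conditioning on evidence (this framing, in terms of conservation of expected evidence, is my gloss rather than anything stated above): by the law of total probability, your current credence already equals your expected future credence,

\[
P(\text{blue}) \;=\; \sum_{e} P(e)\, P(\text{blue} \mid e) \;=\; \mathbb{E}\big[P(\text{blue} \mid E)\big].
\]

So if you are sure your future credence in ‘blue’ will be higher than your current one, every term in that average exceeds the left-hand side, which is a contradiction; the only coherent option is to raise your credence now.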
I think the same principle applies to moral beliefs. If you think that in the future you’ll believe that it’s wrong to do something, then you should believe that it’s wrong now.
As an example of this, if you think that in the future you’ll believe eating meat is wrong, then you sort of already believe eating meat is wrong. I was in exactly this position for a while, thinking that in the future I would stop eating meat while continuing to eat meat in the present. A similar case is deliberately remaining ignorant about something because learning about it would change your moral beliefs. If you’re avoiding learning about factory farming because you think it would cause you to believe eating factory-farmed meat is bad, then on some level you already believe that.
Another case of this shows up in politics, when a politician says it’s ‘not the time’ for some action but that in the future it will be. This is ‘fine’ if it’s ‘not the time’ for practical political reasons, such as the risk that the electorate won’t reelect the politician. But I don’t think it’s consistent to say an action is not moral now but will be moral in the future. Obviously this only holds if the action now and the action in the future are actually equivalent.