Why I Hope (Certain) Hedonic Utilitarians Don’t Control the Long-term Future

TL;DR

Maximizing pleasure is a good proxy for maximizing the number of happy lives given the way the world is right now. However, once technology advances sufficiently, it will probably become a very bad proxy, because we will be able to instantiate pleasure without creating anything remotely like a happy life. If we nonetheless continue to aim at maximizing pleasure and minimizing pain, we will probably be aiming at a world that isn’t very valuable.

Or, if that TL;DR is too long:

TL;DR was TL;DR

If you want to maximize the number of happy lives, you should not be a hedonic utilitarian, because maximizing pleasure is not the same thing as that.

Maximizing Pleasure Will Probably Come Apart from Maximizing Happy Lives

A certain kind of utilitarian thinks that how good the world is just depends on the total amount of pleasure minus the total amount of pain experienced over the history of the universe. If the total amount of pleasure goes up, that makes the world a better place. So the thing to do is always whatever makes the world the best place out of the available options.

Longtermists often assume that what utilitarianism will counsel us to do, once we’ve reached a sufficiently advanced level of technological ability, is to create as many happy lives as possible. For example, Nick Bostrom in “Astronomical Waste” supposes that “what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create,” which he seems to equate with the idea that “our concern is to maximize the total amount of well-being.” Of course, he is neutral in that paper about whether the kinds of happy lives we want to maximize will involve fleshy humans like us, or rather will be simulated.

It’s true that, given our current level of technological ability, maximizing the total number of happy lives is a good way to maximize the total amount of well-being, if we understand “well-being” as pleasure and the absence of pain. That’s so because the kinds of things that can feel pleasure and pain right now are things with lives. We’re extended through time, we make memories, we form and try to execute plans, we update our beliefs, etc. If you want to increase the total amount of pleasure in the world, you have to do it by improving the life of one of the things like us.

But as technology advances, we will probably be in a position to make things that are not at all like us, which nonetheless are capable of feeling pleasure and pain. That is to say, right now humans and other organisms are the only value containers around. But eventually it will be possible to create very different kinds of value containers, such as simulated lives. My concern is that once we can make very different kinds of value containers, the things we would do to maximize the amount of pleasure in the world will be very different from the things we would do to maximize the number of happy lives.

When I try to get in the mood of thinking that it would be good for the universe to be tiled with the largest possible number of simulated happy lives, even then I’m thinking of something basically like human lives, just taking place in a virtual space. People would come into existence, interact with other simulated people, gain skills and knowledge, create works of art, pursue life goals, etc.

Most people on Earth are already no longer on board when we get to this point. But some utilitarians can work themselves into a state of mind where it seems pretty good. If we just bite the bullet and say that simulated happy lives are as valuable as flesh-and-blood ones, then there doesn’t seem to be anything wrong with this scenario. The problem is that there is actually very little reason to think that maximizing pleasure and minimizing pain would look anything like this. Sticking with utilitarianism would involve biting a whole host of other bullets.

Here’s one example. Right now, the value containers that are around (humans and some animals, maybe plants) live basically linearly – we can’t go back to our pasts, we plan for the future and not the past, at each moment in time we experience only that moment in time, etc. This is a pretty core part of what it means to live a life, if you think of it as a sort of narrative extended through time. But eventually we’ll be able to create value containers that live the same moment over and over. We could just simulate (say) the mental state of a person who’s about to propose to his girlfriend, and then simulate the perceptual experience of her saying yes, and run that on a loop. That’s a great way to produce a lot of pleasure. It’s probably much more efficient than simulating the entire history of the relationship up to that point over and over.

So one element of a life as traditionally understood – that it proceeds linearly and doesn’t repeat the same moment on a loop – might or might not end up being correlated with maximizing pleasure. And any number of other aspects of a happy life might end up coming apart from maximizing pleasure, too. Maybe it will be most computationally efficient for simulated beings to be able to experience any point in simulated time as they please; maybe they’ll never update their beliefs; maybe they won’t make plans; maybe they won’t be associated with a limited, contiguous area of physical or virtual space; maybe they won’t have anything worth calling a mind; maybe instances of pleasure won’t require a deeper substrate at all. It could turn out that we discover a way to just generate moments of pleasure that aren’t embedded in any kind of entity extended through time. Maximizing pleasure might just entail using the maximum amount of compute to simulate whatever pleasurable experience has the best ratio of pleasingness to computational complexity.
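
To make that last “ratio” point concrete, here is a toy calculation. Every name and number in it is invented for illustration, not an estimate of anything: the point is just that a pure pleasure-maximizer with a fixed compute budget would spend it all replicating whichever experience wins on pleasure per unit of compute, whether or not that experience is embedded in anything like a life.

```python
# Toy illustration only: all names and numbers below are made up.
# A fixed compute budget is spent entirely on copies of whichever candidate
# experience yields the most pleasure per unit of compute.

candidates = {
    # name: (pleasure per instance, compute cost per instance)
    "full simulated human life": (1_000_000, 10**15),
    "proposal-acceptance moment on a loop": (500, 10**6),
    "butterfly sipping nectar from a marigold": (1, 10**2),
}

BUDGET = 10**20  # total compute available, in arbitrary units

def total_pleasure(pleasure, cost, budget=BUDGET):
    """Pleasure produced by spending the whole budget on copies of one experience."""
    return (budget // cost) * pleasure

for name, (p, c) in candidates.items():
    print(f"{name}: {total_pleasure(p, c):.2e} total pleasure")

best = max(candidates, key=lambda name: total_pleasure(*candidates[name]))
print("Winner:", best)  # with these made-up numbers, the butterfly moment wins
```

With different made-up numbers the full simulated life could come out on top instead; the worry in the text is precisely that we have little reason to expect it to.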

Whether implementing utilitarianism actually leads to anything like a large number of simulated happy lives, then, will depend on contingent facts about what’s computationally most efficient, exactly what kind of mental processes are needed to get pleasure off the ground, and so on. There is little reason to think that anything like the ontological nature of a human life will turn out to be the optimal way to generate pleasure.

To see the relevance of contingent computational facts, consider World of Warcraft. Because WoW is an MMO, players’ actions have to be broadcast to every other player in a certain area. That is, updating the computational state of the entire game requires updating the server itself and each player’s computer’s representation of that state. For that reason, computational costs go up dramatically as you increase the number of players in an area. 100 players all playing separately, nowhere near each other, just require one connection each between player and server: roughly 100 updates to propagate everyone’s actions. But 100 players interacting with each other require the server to relay each player’s actions to the other 99, which is on the order of 100 × 99, or about 10,000, updates. So computational costs go up quickly, and people start to lag and get really mad whenever there are more than 50 or so other people on screen.
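
For what it’s worth, here is a back-of-the-envelope sketch of that scaling point. It uses a deliberately simplified model of MMO networking rather than WoW’s actual netcode: assume each action must reach the server, and the server then relays it to every other player in the same area.

```python
# Simplified model, not WoW's actual networking: each player's action is sent to
# the server, and the server relays it to every other player in the same area.

def messages_per_round(n_players, interacting):
    """Messages needed for one action from each of n_players."""
    if not interacting:
        # Isolated players only talk to the server: grows linearly with n.
        return n_players
    # Each action also gets relayed to the other n - 1 players: grows quadratically.
    return n_players + n_players * (n_players - 1)

for n in (10, 50, 100, 200):
    print(f"{n:>4} players: solo={messages_per_round(n, False):>6}, "
          f"interacting={messages_per_round(n, True):>6}")
# 100 isolated players -> 100 messages; 100 interacting players -> about 10,000.
```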

So what’s the relevance of that? Well, if you’re like me, the image of the maximally hedonic future you conjure up when getting into the mood to like the idea of simulating a ton of happy lives involves a bunch of people interacting with each other. But whether that’s actually the optimific scenario will depend on the tradeoff between the hedonic benefits of interacting with other beings and the computational costs of simulating those other beings. If your simulated worlds involve people interacting with each other, then each simulated person will have to form a representation of the other simulated people as well (this isn’t quite the same thing as the problem in WoW, but I think the comparison helps illustrate the general idea). That might be a great deal more complex than a bunch of separate simulated Robinson Crusoes, who only have to represent relatively computationally light things like trees and rocks or whatever. 1,000 people interacting and forming internal representations of each other is a lot more complex than 1,000 people only forming separate internal representations of simple entities. Even if the Robinson Crusoes were less happy than they would be in a shared simulated universe (an assumption I don’t see much reason to endorse, assuming that we can basically shape the hedonic profile of the simulated beings at will), it still might turn out that we could generate so many more of them for the same computational cost that that was the way to go.
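
Here is the same sort of toy arithmetic applied to the Crusoe comparison. All of the costs below are placeholders I made up; the only real point is structural. If each being in a shared world has to represent every other being, the per-being cost grows with the population, while solo sims have a flat per-being cost, so a fixed budget can buy vastly more Crusoes.

```python
# Toy numbers only: the costs below are placeholders, not estimates of anything.

BUDGET = 10**9          # total compute per tick, arbitrary units
BASE_COST = 1_000       # cost of running one mind, ignoring what it represents
PER_ENTITY_COST = 50    # cost for one mind to represent one other entity

def cost_shared_world(n):
    """n beings in one shared world, each representing the other n - 1 minds."""
    return n * (BASE_COST + PER_ENTITY_COST * (n - 1))

def cost_solo(n, simple_objects=20):
    """n isolated Crusoes, each representing only a few simple objects."""
    return n * (BASE_COST + PER_ENTITY_COST * simple_objects)

def max_affordable(cost_fn, budget=BUDGET):
    """Largest population whose total cost fits within the budget."""
    n = 0
    while cost_fn(n + 1) <= budget:
        n += 1
    return n

print("Interacting beings affordable:", max_affordable(cost_shared_world))
print("Solo Crusoes affordable:", max_affordable(cost_solo))
# With these made-up numbers you can run vastly more Crusoes, so even somewhat
# less happy Crusoes could win on total pleasure per unit of compute.
```

Whether the shared world actually wins depends entirely on how numbers like these shake out, which is exactly the sort of contingency the argument is pointing at.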

That means that utilitarians aren’t entitled to the assumption that the kinds of beings it’s optimal to simulate will even be interacting with one another. Even if it did turn out to be the case that the optimal beings would be living something worth calling a life, we need to be careful not to assume that their lives would have any of the contingent elements we currently associate with a life, including the ones that we currently imagine when we think about what makes a life valuable. We can’t smuggle in the assumption that the pleasure-optimized universe will involve relationships between minded entities. And that’s just one example.

So here is a descriptive claim: maximizing pleasure might diverge significantly from maximizing the number of happy lives, especially if we think of “happy lives” as anything remotely like what we’re living right now.

Why This Is Bad

Obviously, I would like you to move from that descriptive claim to the following normative one: we shouldn’t try to maximize pleasure as such. I’ll try to make the case for that inference.

I think happy lives are good. I think simulated happy lives might be good too. But I don’t think maximizing whatever ontological substrate turns out to be the most efficient kind of value container for pleasure is necessarily good. If I got the news that the way to make the universe maximally pleasure-dense would be to simulate, over and over, the exact same experience of some particularly excitable butterfly sipping nectar from a marigold, I would not think that we should make the universe maximally pleasure-dense (better to be Socrates mildly satisfied than an astronomical number of momentarily-existing butterfly simulations perfectly satisfied).

I think that utilitarianism is a plausible approximation to the correct value system, if we restrict our attention to circumstances where maximizing pleasure roughly correlates with treating people well. But once we are in a position to make all kinds of different value containers, ones that are not like existing people at all, then utilitarianism will become a terrible approximation to the correct value system. It would have us optimize for things that are nowhere near as good as happy lives.

There are two versions of this worry. The first one requires it to be the case that, as a matter of fact, maximizing happy lives won’t correlate with maximizing pleasure. I think this is really plausible, especially if we understand “happy life” in any kind of thick way. If you think a happy life has to be anything like a happy human life today, then it’s wildly unlikely that maximizing pleasure will also mean maximizing happy lives. We would have to have gotten incredibly lucky to find ourselves basically already in line with the optimal way to maximize pleasure. We weren’t designed by an intelligent force trying to create the optimal pleasure-experiencer, so it would be shocking if evolution had accidentally hit upon that anyway.

But I can imagine arguments to the effect that maximizing pleasure will correlate with maximizing happy lives. For instance, someone could say that feeling pleasure requires being conscious, and being conscious requires some of the contingent elements of life that I’ve been supposing are dispensable. Maybe being conscious requires being aware of oneself as extended through time. Maybe it requires having beliefs that change with evidence, or forming plans, or having desires. If that were true, then anything that wasn’t a whole lot like us wouldn’t actually be able to feel pleasure, and so utilitarianism wouldn’t tell us to make a lot of those things.

There are a number of ways it could turn out that maximizing pleasure correlates with maximizing happy lives. One way that might be true is if feeling pleasure requires having something very similar to a human life. Another way would be if simulations actually can’t feel pleasure, and only carbon-based things with bodies can. And just to illustrate the point from another angle, another way would be if God existed and promised to give us a whole lot of extra resources so long as we devoted them to producing more happy, corporeal human lives. So even if simulating instantaneous moments of butterfly minds experiencing bliss were the most resource-efficient way to produce pleasure, maybe we’d maximize total pleasure by generating relatively inefficient carbon-based lifeforms, since we’d get more resources total as a gift from God.

The second version of my worry covers this sort of reply (which, for the record, I don’t find very plausible anyway). The issue is that staking one’s evaluative views in this way on contingent empirical facts, about the nature of pleasure, or consciousness, or God’s preferences, is misguided. Suppose you agree that if maximizing pleasure didn’t correlate very strongly with maximizing happy lives in a certain advanced stage of technological development, then maximizing pleasure wouldn’t be the thing to do. If you agree with that, then you agree that maximizing pleasure is not fundamentally what matters. Arguing that maybe pleasure does correlate with the thing that fundamentally matters anyway would be neither here nor there.

By analogy: suppose you thought that GDP correlates with welfare, but that GDP is what fundamentally matters. Someone might come along and give all sorts of arguments that GDP doesn’t correlate perfectly with welfare, and in fact they come dramatically apart in counterfactual scenarios. You could reply by trying to argue that actually GDP would correlate with welfare in those scenarios. But as long as you agree that we shouldn’t seek to maximize GDP if it didn’t correlate with welfare in those scenarios, it doesn’t actually matter whether GDP does as a matter of fact correlate with welfare. The existence of a counterfactual scenario where the two come apart, and where you agree that we shouldn’t maximize GDP in that case, shows that GDP isn’t what fundamentally matters to you.

Here’s another way of putting this point. One way to try to rule out the possibility that maximizing pleasure won’t correlate with maximizing happy lives is to say that pleasure requires consciousness, which requires living something like a human life. But this way of viewing things stakes the evaluative significance of pleasure on a contingent empirical matter (what an entity has to be like in order to be conscious) that you might be wrong about. So instead of making consciousness do this work, why not make your values do this work? I.e. why not say that in order for a certain mental state to be valuable, it has to be embedded in something approximately like a human life? Then you get the evaluative conclusion you want either way, without having to tie it to a debatable empirical matter.

To be sure, a certain kind of utilitarian might just bite the bullet here. They could say: well, if maximizing pleasure turns out to look nothing like maximizing the number of happy human-ish lives, so be it. Bostrom was just wrong to say that we want to maximize the number of worthwhile lives. Pleasure is the only good thing. So we should create whatever kind of value containers allow us to fill the world up with as much pleasure as possible. If humans aren’t the right kind of value container, that’s just too bad.

This view is conceivable, but it strikes me as pretty absurd. I’d just like to point out that if you are this kind of bullet-biting utilitarian, you are not in a position, when painting your picture of the ideal future, to assume that it will involve anything like a bunch of happy lives. You would be committed to maximizing pleasure whatever that ends up consisting in. By implication, if you start from the idea that a valuable future will have to have things remotely resembling a human life, then you need to reject this kind of bullet-biting utilitarianism.

One thing that’s notable about utilitarianism here is that the ways that it diverges from people’s ordinary values are often viewed as improvements. Ordinary people don’t pull the lever, and they let five people die for no good reason. Ordinary people fetishize rules and are concerned about their “integrity” or “keeping their hands clean.” Utilitarians just do the things that make the world a better place. Instead of being viewed as a decent-but-not-perfect proxy for what people actually value, utilitarianism is viewed as a cleaned-up and improved version of people’s values.

That fact is concerning if you want the future to be filled with happy lives. Once maximizing pleasure starts coming apart from maximizing happy lives, utilitarians might view concern for maximizing happy lives the same way they view concern for keeping one’s hands clean or maintaining one’s integrity – a decent proxy for what really matters in primitive conditions, but increasingly a suboptimal attitude for making the world a good place.

Replies

What about preference utilitarianism?

Some utilitarians think that what’s valuable isn’t pleasure, but instead it’s preferences being satisfied. Does this argument apply to that view too?

Probably yes, although I think the case for hedonic utilitarianism is stronger, so that’s why I talked about it. I think preferences can probably be instantiated in weird free-floating ways that aren’t tied to a whole life or a persistent entity, just like experiences of pleasure. One complication is that if you instantiate an entity that’s cognitively sophisticated enough to have preferences, it might have preferences to keep existing, or not to be reset back to a previous state, so it might not be okay to terminate or reset it right after it gets its preferences satisfied. However, we could probably just make sure the things we create to farm utility don’t have preferences like this, so it would be fine to instantiate events of preference satisfaction that aren’t embedded in entire lives.

What if we say pleasure has to be part of a life?

Suppose the utilitarian says that what’s valuable is pleasure in the context of something reasonably like a human life. Does that avoid this worry?

Yes it does, as far as I’ve argued here. I’d be less worried about utilitarians being in control of the future if they all started saying this.

I do think that once the utilitarian starts making concessions like this, though, it won’t be as plausible that pleasure is the only intrinsically good thing. There would have to be some reason pleasure has to be in the context of a life for it to be valuable, and that reason might suggest that there’s something else about lives that’s valuable. And this isn’t a parameter utilitarians can change freely without affecting anything else. For example, if they say only the pleasure of people or entities that have beliefs and desires matters, that might entail that the pleasure of certain animals doesn’t matter (see Shelly Kagan’s “What’s Wrong with Speciesism?” for a related point).

What if simulated pleasure isn’t good?

It might be relatively easy to instantiate simulated pleasure in ontologically weird ways that aren’t tied to a life. But suppose simulated pleasure isn’t good. Maybe then it’s not so easy to make pleasure occur in weird ways. So pleasure and happy lives will still coincide.

This reply doesn’t work, for two reasons. The first is that it’s still a problem if there’s any conceptually possible level of technological ability where we can instantiate pleasure in weird ways in biological substrates. If maximizing such weird pleasure would be worse than maximizing happy lives, then hedonic utilitarianism is false. The second is that it seems like this conceptual possibility is plausible in the real world. If we get arbitrarily technologically advanced, we’ll probably be able to instantiate pleasure in all kinds of weird ways, even if we’re restricted to doing it with biological matter.

Who else says they want to maximize happy lives?

I only gave one example of a longtermist utilitarian who seemed to equate maximizing pleasure/well-being with maximizing happy lives. If other utilitarians don’t do this, then maybe it was inaccurate for me to insinuate that they’re implicitly drawing on the plausible goodness of there being a bunch of happy lives to support their utilitarian views that actually conflict with that picture.

Reply 1: if longtermist utilitarians don’t care about there being a ton of happy lives, and they just care about pleasure however it’s instantiated, so much the worse for them. (Some people do explicitly disavow the idea that people matter, and say that it’s just experiences that matter. The people who talk about this sometimes conflate “experiences” with “person-moments.” That suggests to me that they are still assuming that pleasure has to be instantiated in something like a person.)

Reply 2: everybody in population ethics talks about lives, although they aren’t all longtermists. If you ctrl-f “lives” in The Precipice, you’ll see Toby Ord talking about the future containing “many new lives of high wellbeing,” and worrying about scenarios where the future is filled with “people cursed with lives with extremely negative wellbeing.” Joe Carlsmith in “On Infinite Ethics” says that we should focus on “agents” as the location of value rather than e.g. times, and he talks about “infinite people living the best possible (painless) lives you can imagine.” I predict that Will MacAskill’s forthcoming book What We Owe the Future will be full of talk of the value of future lives, and will not have a lot of talk about the value of future free-floating experiences of pleasure independent of any agentic or lifelike substrate. This prediction is cheating a little, since before I wrote it down I looked at his NYT article “The Case for Longtermism” and saw sentences like “Future people count. There could be a lot of them. And we can make their lives better.” (Sorry if any of these people object to being called utilitarians – the point is more just that most parties to these disputes talk about happy lives.)

A lot of people talk about lives. That’s either something they all just overlooked when thinking about how pleasure and pain might be instantiated in the future, or a sign that something about happy lives matters to longtermists.

Everyone already knows maximizing the number of happy lives is not what utilitarianism advises.

I spoke as if the ideal future involves maximizing the number of happy lives. But that’s not true. Utilitarians want to maximize aggregate utility, and that means summing up all the happiness in all the happy lives. Creating 1 trillion pretty happy lives is not as good as creating 900 billion ecstatic lives. So it’s false to say utilitarians want to maximize the number of happy lives.

I was just saying “maximize the number of happy lives” because it’s more perspicuous and other people use that phrase. I don’t think the more precise version changes the argument. When longtermists think about the ideal future, they do think of pleasure as being instantiated in the context of a life, not just as floating independently, so in fact their envisioned scenarios do not optimize for the total amount of pleasure however it happens to be instantiated.

What about this thing this one person says that you didn’t address?

I’m probably unaware of some relevant literature, and maybe something I said has already been pointed out/dealt with/refuted. If that’s so, please let me know about it!

Conclusion

So that’s why I hope utilitarians of this particular sort don’t control the long-term future. Maximizing pleasure happens to be a decent proxy for maximizing happy lives, right now. But once we can create different kinds of value containers, the two will probably come apart. And in that scenario, I hope we create happy lives, rather than generate whatever weird things are the most resource-efficient way to instantiate pleasure.