Welfare and felt duration (Andreas Mogensen)
This paper was published as a GPI working paper in November 2023.
Introduction
An experience of pain is worse for you the longer it goes on. This much seems obvious. But how should we understand the duration of a pleasant or unpleasant sensation?
The question is worth raising because we seem able to distinguish between subjective and objective time. A minute sometimes feels much longer than a minute, and sometimes much shorter. It’s possible that different kinds of minds – those of small, high-metabolism animals (Prosser 2016: 85–87; Schukraft 2020; Yong 2022: 74–76) or of digital persons of the not-too-distant future (Bostrom and Yudkowsky 2014; Hanson 2016; Shulman and Bostrom 2021) – might vary dramatically in their experience of time’s passage, living through a much greater amount of subjectively experienced time within a given unit of objective time. To them, the experience of pain filling mere seconds or minutes might in some sense be more like our experience of a pain that lasts many hours or days.
How well or badly someone’s life goes is naturally understood as something to be assessed from her perspective (Railton 1986; Rosati 1996; Sumner 1996; Hall and Tiberius 2016; Dorsey 2017). Therefore, it seems intuitive that a valenced experience that’s subjectively experienced as longer makes a greater difference to your welfare, holding fixed its intensity, objective duration, and any other evaluatively significant properties, while a valenced experience that’s objectively longer makes no greater difference to your welfare, holding fixed its intensity, subjectively experienced duration, and any other evaluatively significant properties (compare Lee 2013; Bostrom and Yudkowsky 2014; Schukraft 2020; Shulman and Bostrom 2021). As Terry Pratchett (1990: 10) writes: “the important thing is not how long your life is, but how long it seems.”
I argue against the claim that the subjective duration of a valenced experience is the important thing. More exactly, I argue against the claim that a valenced experience that’s subjectively experienced as longer makes a greater difference to your welfare, holding fixed its intensity, objective duration, and any other evaluatively significant properties. I do not, however, present a positive argument for the contrary claim that the extensive magnitude of a valenced experience should instead be measured in terms of its objective duration. As the natural alternative, I do think that position is a lot more plausible than it might initially appear. However, I also give some credence to the idea that neither subjective nor objective duration has any fundamental evaluative significance and that what makes longer pains worse ultimately has to be explained in terms that have nothing essentially to do with length of time or experience thereof (see section 4).
I start in section 2 by clarifying some basic conceptual issues and explaining the importance of getting clear on how, if at all, subjectively experienced duration modulates welfare. In section 3, I look at two analyses of the nature of subjective time experience in the recent philosophical literature that strike me as especially attractive. I argue that, although each may be plausible as an account of what felt duration consists in, on neither is it plausible that felt duration per se modulates the contribution of valenced experience to individual welfare. In section 4, I rebut an intuition pump appealing to digital reproductions of conscious experiences that many people find persuasive as an argument for measuring the duration of valenced experiences in terms of subjective time. Section 5 provides a brief summary and conclusion.
I think there are some interesting arguments here, but the argument the paper targets in “4.3 Computational Equivalence” can probably still be saved, because it needn’t depend on any controversial parts of computational theories.
Instead, imagine two identical brains undergoing the same physical events, but one doing so at twice the speed and over a period of time half as long.[1] Neural signals travel twice as fast, the time between successive neuron firings is halved, and so on.
In my view, any plausible theory of consciousness and moral value should assign the same inherent hedonistic value to the two brains over their respective time intervals.[2] Computational theories are just one class of theories that do. But we can abstract away different physical details.
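The two-speed setup can be made concrete with a toy simulation (my sketch, not from the paper or the comment; `run_brain` and its dynamics are invented for illustration): the same deterministic update rule run with two different per-step durations yields an identical sequence of states, with only the total objective time differing.

```python
# Toy sketch (hypothetical, not from the paper): one deterministic "neural"
# update rule run at two speeds. The sequence of states is identical; only
# the wall-clock time attributed to each step differs.

def run_brain(steps, step_duration_ms):
    """Run an arbitrary deterministic update rule.

    Returns the state history and the total elapsed objective time (ms).
    """
    state = 1.0
    history = [state]
    for _ in range(steps):
        state = (state * 1.5) % 1.0 + 0.1  # arbitrary deterministic dynamics
        history.append(state)
    return history, steps * step_duration_ms

slow_states, slow_time = run_brain(steps=100, step_duration_ms=2)  # 2 ms/step
fast_states, fast_time = run_brain(steps=100, step_duration_ms=1)  # 1 ms/step

assert slow_states == fast_states   # the same events, in the same order
assert fast_time * 2 == slow_time   # unfolding in half the objective time
```

If the ordered history of events is what fixes which experiences are realized, nothing in this description distinguishes the two runs except the clock time each step consumes, which is the intuition the comment appeals to.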
On the other hand, I can imagine two people with identical preferences living (nearly) identical lives over the same objective time intervals (in the same reference frame), but one with twice the subjective rate of experience of the other, and this making little difference to the moral value on preference-based accounts. Each person has preferences and goals, like getting married, having children, or there being less suffering in the world, and while how they subjectively care about those things matters, the subjective rate of experience doesn’t make their preferences more or less important (at least not in a straightforwardly multiplicative way). Rather, we might model them as having preferences over world states or world histories; their subjective appreciation of how things turn out isn’t what matters, only that things turn out more or less the way they prefer.
Maybe the thought experiment is in fact physically impossible, except via relativistic effects, which the author addresses. But I don’t think the right theory of consciousness should depend on those details, and we could make the two brains and their events different enough to be physically possible while still preserving value, with corresponding events unfolding at twice the speed.
And I’d guess the brains realize the same subjective value from their own perspectives in most cases. Still, people can care about their relationships with the world in ways we might in turn care about: maybe, for example, they want the timing of their subjective experiences or neural activity to match some external events. That seems like an odd preference, but I’m still inclined to care about it.
Thanks, Michael—Sorry for the delay in replying to this!
What I was trying to argue in 4.3 is that the following is a bad reason to think that the different experiences have the same value in spite of lasting for very different amounts of clock time: a computational theory of consciousness is true and the time that a given computation needs in order to complete when physically instantiated ought to be irrelevant to the character of mind, since there’s nothing in a Turing-machine model of computation corresponding to the amount of time the machine spends in a given configuration or requires when transitioning from one configuration to another.
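The point about the Turing-machine formalism can be illustrated with a small sketch (mine, purely illustrative; `run_tm` and the `flip` machine are invented names): a machine is exhausted by its transition table, and nothing in that formalism represents how much clock time a step takes, so a simulator’s output is independent of any per-step timing.

```python
# Hypothetical illustration (mine, not the paper's): a Turing machine is fully
# specified by its transition table; the formalism says nothing about how much
# clock time the machine spends in a configuration or between transitions.

def run_tm(transitions, tape, state="q0", head=0, halt="qH"):
    """Execute a transition table {(state, symbol): (new_state, write, move)}."""
    tape = dict(enumerate(tape))
    while state != halt:
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape))

# A machine that flips bits until it reaches a blank, then halts.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("qH", "_", "R"),
}
run_tm(flip, "0110")  # → "1001_", however long each step is taken to last
```

The argument under discussion moves from this absence of a time parameter in the model to the conclusion that physical running time is irrelevant to the realized mind; the reply in the thread is that this absence may just be an abstraction of the model rather than a feature of the phenomenon.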
I take it that that style of argument does depend on assuming that mind is Turing-style computation. I can see that you could perhaps have some other kind of theory of mind and say that, according to this theory, mental processing is to be modelled as this kind of state, followed by this kind of state, and we make no reference in our model to the amount of time the system is in each state or the amount of time required between transitions, although what is happening is not to be understood as computation. You might then argue in a somewhat similar way that because you model mental processing thusly and the model omits any time dimension, the time required for the physical instantiation of the modelled process ought to be irrelevant to the value of a given experience.
However, if you try to say something along those lines, then I think a very similar objection arises to the one I outline in the paper. Since what’s described appears to be an atemporal model of mental processing, whereas experience in fact unfolds in time, the model has to be incomplete and needs to be supplemented somehow if it’s to properly describe the basis of experience. We then seem to be drawing inferences about the phenomena we are trying to model that simply reflect gaps and abstractions in our models of them, i.e., ways in which our models fail to capture the reality of what’s actually going on. That seems like a mistake. So I think that what I say in the paper about computationalism can be recast as applied to any similar way of drawing inferences from any essentially atemporal model of what realizes experience. In that sense, I don’t think that retreating from the computational theory of mind helps.
That having been said, I want to emphasize that the argument in section 4.3 is not intended to show that it’s false to judge that the clock time required for a physical process to complete is irrelevant to the value of the realized experience. It’s merely intended to show that a particular argument for that judgment isn’t a very good one. In that sense, I’m not giving any kind of positive argument against the claim that the two brains you describe realize experiences with the same hedonic value. I’m just saying that if you think the amount of clock time a person is in pain is irrelevant to the disvalue of their experience in this sort of case, you need a different reason for holding that view. In some sense, the rest of the paper might be taken as arguing that other reasons of that kind don’t seem to be available.
Fair enough about the objection being more general.
However, I don’t see why such a model (including a computational one) must be incomplete. What specific and important observations (or intuitions) about consciousness does it fail to explain?
The mere fact that experience unfolds over time doesn’t seem important to me. Maybe the disagreement is over just that? Or do you have something more in mind?
No, it’s just the fact that the experience unfolds in time. It seems clear to me that that’s important from the perspective of explaining consciousness as an empirical phenomenon. I obviously agree we might have our doubts about whether the way experience unfolds in clock time matters from an ethical perspective.
Oh my god I am so excited for this, I’ve been trying to put together a thesis paper on this exact subject! I have had such a hard time finding prior relevant work.