The Cryonics reductio against pure time preference: a rhetorical low-hanging fruit—or “Do we discount the future only because we won’t live in it?”
Thanks to Gavin, Vinicius, Fernando and Bruno for comments on early drafts.
The Cryonics reductio: If “pure time preference” is rational (i.e., if an experience, such as eating a cupcake, is more valuable now than in the future), then cryonics (i.e., freezing your brain to wake up in a future where lifespans are way longer) is irrational: when you wake up (say, in 100 years), your longer future life will be worth way less than the same life lived now[1]. Similarly: if a person is frozen and wakes up in the future (like the guy in Futurama), they will suffer a great harm[2].
(Seriously, this is my main point: is this reasoning persuasive? Would anyone say “yeah, cryonics is nonsense because my life does not have relevant value in 2300”?)
Maybe I should add: I’m not advocating for cryonics. As far as I know, there may be many objections to it; but I don’t think the fact that it would “take me to the future” is one of them. Quite the opposite: it’s one of the best arguments for it, since the present sucks.
Maybe I should add_2: This is not about discount rates per se, but about time preference / temporal neutrality. I don’t want to turn this into a long argument about discount rates (this was supposed to be a shortform post, and then the caveats ran out of control), as I believe others have done a better job than I could do here (and here, and here, here, etc.). I’d say social discount rates, or SDRs, are one of those subjects where, despite being ignored by the general public, there’s a long high-quality literature no one could expect to review in one lifetime. In summary, a “pure time preference” rate would be just one of the parameters for computing your discount rate (with something like a Ramsey formula, sketched below), besides the marginal elasticity of utility and the risk of death / extinction, which no one denies are valid parameters.
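For reference, a minimal sketch of the Ramsey formula in its standard textbook form (notation varies across the literature; this is an illustration, not any particular author’s exact formulation):

r = δ + η·g

where r is the social discount rate, δ the rate of pure time preference (in some formulations decomposed into pure impatience plus the risk of death / extinction), η the elasticity of the marginal utility of consumption, and g the growth rate of per-capita consumption. The dispute in this post concerns only the impatience component of δ; the η·g term and the extinction-risk component are the parameters no one denies are valid.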
I googled “pure time preference” + “cryonics” and found no argument / example like the one above[3]. Well, it is not exactly a new argument: the overall rationale of this objection to pure time preference has appeared before in the literature cited in the next paragraph, and I vaguely remember cryonics advocates suggesting that one of the pros of cryonics would be to make you more interested in the long-term future, as there is a relevant chance that your existence will occur in it. But I think that framing this reasoning clearly is important, since in philosophical arguments a lot of weight (too much, maybe) rests on what intuitions an example appeals to. I wouldn’t say it is very “down-to-earth”, but I guess it’s more digestible to the lay person, as a matter of individual decision-making, than the arguments in the literature mentioned below.
(Very) similar (and formally way better) arguments:
Tyler Cowen’s sci-fi example in Stubborn Attachments (right after fn. 50 in my unpaged draft): if we build a spaceship that travels close to lightspeed, there’s no use worrying about brakes, since the travelers will likely stop only in the distant future (given relativity), when discounting makes their lives nearly worthless, so we might just let them die.
Wiblin & Ord’s Tutankhamun argument: under a positive pure time preference rate, the Pharaoh’s life would have been worth billions of lives in the present (a back-of-the-envelope illustration follows this list). Along the same lines, Greaves mentions that a consistent and general pure time preference rate implies past experiences are more valuable than present ones.
Sarah’s example (i.e., pure time preference violates the Pareto principle), according to Cowen and Greaves (section 7.1).[4]
Philosophers vs economists on discounting (Carl Shulman, 2012)
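To get a feel for the magnitudes behind the Tutankhamun argument, here is a back-of-the-envelope calculation (my illustrative numbers, not necessarily Wiblin & Ord’s exact figures). A pure time preference rate of 1% per year, compounded over the roughly 3,300 years since the Pharaoh’s death, gives

(1.01)^3300 = e^(3300·ln(1.01)) ≈ e^32.8 ≈ 1.8 × 10^14

so one unit of Tutankhamun’s welfare would be worth about 180 trillion units of present welfare, i.e., tens of thousands of times the combined welfare of everyone alive today.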
Maybe I’m repeating myself: even if you don’t find cryonics appealing for yourself now, it’s unlikely that you think that living in the future would be bad for you in any way. I think this example (or any example that actually makes someone consider the idea of living in the future) highlights that what lurks behind temporal discounting is the fact that we won’t live in the future, because we have very limited lifespans: if there’s a chance of dying tomorrow, and no chance of surviving more than, say, 100 years (part of them in slow decay), a “reasonable allocation” of your resources and experiences across your lifetime might include a preference for the present (and possibly hyperbolic discounting)[5]. And, of course, good old short-sightedness and selfishness.
(Really, that’s all.)
P.S.: I imagine someone might object that people who favor Nordhaus against Stern in the debate over (long-term) SDRs adopt the so-called positive or descriptivist approach: instead of using the Ramsey formula, one uses as the SDR the long-term risk-neutral rate of interest (or, in a similar vein, credit market interest rates). This way, they totally bypass the (normative) debate over “pure time preference.” However, I believe there’s a catch here: market rates reflect our individual economic decisions on investment and consumption, and thus do not reflect our judgments on the value of the far future; that is to be expected, since we have limited lifespans (as my explanation of the cryonics example above emphasized) and limited ability to process information. And these individual decisions express some sort of pure time preference (or, as I claim, a preference for times we live in). But why should judgments about the value of the future incorporate these biases, given that societies (and humanity itself) have longer lifespans (and, I’d like to say, more computing power)?
(Damn, I said I wouldn’t argue about SDRs, but so be it…)
I noticed there are two usual defenses of this positive approach:
a. efficiency: this rate incorporates the opportunity costs of investing in future projects, and a different SDR could be exploitable through arbitrage (or crowd out private investment). However, there are two general objections:
bite the bullet, but keep things apart: Fleurbaey and Zuber (2013) claim this argument conflates subjects that should be kept separate: matters concerning the SDR (i.e., comparing present and future consumption and welfare) and the profile of possible investments / decisions (which is what opportunity costs are about). According to this rationale, if market returns are so high, we should save for future investment instead of consuming resources now. Patient philanthropists show how to bite this bullet.
market failure: intergenerational affairs and global public goods are a paradigmatic case of market failure (even of government failure), so I don’t quite see what the arbitrage / crowding-out arguments are doing here. It’s not like big-tech companies would be investing in the long-term future (like starting up space exploration) to reap profits in the next decade; instead, they eventually do it because it’s awesome.
b. collective choice: the positive SDR is the result of decentralized individual decisions, instead of being set by a central authority. Ok, this is not that plausible; I’m sort of surprised by this argument, but Nordhaus actually mentions it, and I’ve seen others follow him. I think the objections above also apply: the long-term future is a case of market failure, partly caused by biases explainable by our short lifespans. Besides that, it is not clear to me why our judgments on the value of the future should, at the social level, be aggregated like prices in financial markets. However, this suggests that these judgments could instead be made through democratic politics… ideally, I’d agree with that, and I’d really like to see more research on how citizens would respond to something like “how should we choose our SDR for intergenerational projects?”; but, except for the cases cited in this Future Perfect, I’m not very optimistic about it. I think fiscal and monetary policy often pose analogous intertemporal dilemmas, with a track record of outcomes biased towards the short term.
[1] Of course, you might say that a cupcake tomorrow (t+1) will have for you now (t) the same utility that a cupcake at t+1+10^6 will have for you at t+10^6; by this reasoning, cryonics could be assessed as rational by your future self. But this does not solve the dilemma: our reference frame for the evaluation is now. If you have pure time preferences, then your cupcake at t+1+10^6 has negligible value now, because your present self has no relevant interest in the experiences that will happen after t+10^6.
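To put a number on “negligible” (a sketch with illustrative figures of my own): under exponential discounting at even a tiny pure rate of δ = 0.1% per year, the discount factor over 10^6 years is

e^(−δt) = e^(−0.001 × 10^6) = e^(−1000) ≈ 5 × 10^(−435)

so, evaluated from t, the cupcake at t+1+10^6 is worth practically nothing.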
In some way, I agree with that last point: since I don’t think personal identity is particularly relevant from a moral point of view (at least from what we could call a Theory of the Good), I don’t care (much) more about what would happen to that guy than to some other person living after t+10^6. I’m just following Parfit here. I guess that’s likely the best objection one could offer against cryonics (well, not against cryonics per se: death is still bad and a waste).
[2] Are my intuitions wrong here? I always say that a modus tollens for one person might be a modus ponens for another. I mean, someone could just bite the bullet and say “yeah, future-me would be less valuable because they will only exist in 2300.” The reasoning in the reductio is still valid. For this person, from an “objective” POV, cryonics would be a waste, even if maybe not from a subjective POV (i.e., for the person who just woke up from cryo-sleep, only seconds have passed).
[3] I found these results: Long-term investment fund at Founders Pledge—EA Forum; Search results for `time irreversibility` - PhilArchive; Cryonics Questions—LessWrong.
[4] “suppose that a particular person – Sarah, say – could live either in this century or the next. Consider two states of affairs that differ over when Sarah lives. Suppose that Sarah’s well-being is slightly better in the state of affairs in which she lives later, while everyone else’s well-being is unchanged. Then according to the Pareto principle, the ‘Sarah lives later’ state of affairs is better. But according to a value function whose rate of pure time preference is positive, this state of affairs may be worse. Thus δ ≠ 0 is inconsistent with the Pareto principle.”
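To make the quoted argument concrete, a minimal formalization (my notation, not Greaves’s): suppose Sarah lives either at t = 0 with well-being w, or at t = 100 with well-being w + ε (ε > 0), with everyone else unaffected. Under a value function with pure time preference δ > 0,

V(Sarah lives later) = (1+δ)^(−100)·(w+ε) < w = V(Sarah lives now) whenever ε < w·[(1+δ)^100 − 1]

so for a small enough ε the Pareto-better state of affairs (“Sarah lives later”) comes out worse, which is exactly the inconsistency the quote points to.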
[5] Again, no originality here: Phil Goetz brought it up in the LW debate about cryonics 11 years ago. It’s another idea that often seems obvious to LW and EA people, but it should be explicitly stated in order to dissolve some of the intuitions in favor of pure time preference.
Given that very few people are signed up for cryonics, being inconsistent with support for cryonics doesn’t seem like much of a reductio in general. It seems plausible to me that part of the reason people don’t sign up is that the far future doesn’t seem ‘real’ to them, which is sort of like discounting.
Thanks for your comment.
Do you think that, for the people who don’t sign up because the far future doesn’t seem ‘real’ to them, this would be equivalent to pure time preference? I mean, would they bite the bullet and say “True, experiences in the future are less valuable than the same experiences had earlier, even if it’s me having them”?
I don’t rule out this possibility, but I think it might be important to clarify why people discount the future. I don’t see much problem in discounting the future because of uncertainty (e.g., the risk of death / extinction, or the risk that a project may be unsuccessful); but uncertainty is something we can work to reduce, while “pure time preference” is not.
I ultimately agree with you (pure time discounting is wrong… even if our increasing wealth makes it a useful practical assumption), but I don’t think your argument is quite as strong as you think (nor is Cowen’s argument very good).
In particular, I’d distinguish my selfish emotional desires regarding my future mental states from my ultimate judgements about the goodness or badness of particular world states. But I think we can show these have to be distinct notions[1]. Someone defending pure time discounting could just say: while, as far as my selfish preferences go, I don’t care whether I have another 10 happy years now or in 500 years, it’s nevertheless true that, morally speaking, the world in which that utility is realized now is much better than the one in which it is realized later.
This is also where Cowen’s argument falls apart. The Pareto principle is only violated if a world in which one person is made better off, and everyone else’s position is unchanged, isn’t preferable to the default. But he then makes the unjustified assumption that Sarah isn’t ‘made worse off’ by having her utility moved into the future. That just begs the question since, if we believe in pure time discounting, Sarah’s future happiness really is worth only a fraction of what it would be worth now. In other words, we are just being asked to assume that only Sarah’s subjective experience, and not the time at which it happens, affects her contribution to overall utility / world value.
Having said all this, I think that every reason one has for adopting something like utilitarianism (or, hell, any form of consequentialism) screams out against accepting pure time preferences, even if that isn’t formally required. The only reason people even entertain pure discounting is that they are worried about the paradoxes you get into if you end up having infinite total utility (yes, difficulties remain even if you just try to directly define a preference relation on possible worlds).
---
^1: I mean, your argument basically assumes that, other things being equal, a world where my selfish desires are satisfied is better than one in which they are not. While that is a coherent position to hold (it’s basically what preference-satisfaction accounts of morality hold), it’s not (absent some a priori derivation of morality) required.
For instance, I’m a pure utilitarian, so what I’d say is that while I selfishly wish to continue existing, I realize that if I suddenly disappeared in a poof of smoke (suppose I’m a hermit with no affected friends or relatives) and was replaced by an equally happy individual, that would be just as good a possible world as the one in which I continued to exist.
I’m very grateful for your comment.
Do you think I should add an explicit caveat remarking that the reductio assumes only self-regarding reasons / preferences?
For instance, I’m not in favor of cryonics for myself—I currently consider that, given the required investment plus all the uncertainties, I’m likely better off, from a moral point of view, by donating to effective charities (or even to another project I might value even after death, such as making my loved ones happy). But notice this has nothing to do with time preference (quite the opposite).
About Sarah’s example… Well, I agree with you; but notice that the reasoning in the Cryonics reductio is still valid, and that was my whole point. I’m not advocating for cryonics; I’m basically asking if one thinks that it’s a bad option because it aims at future experiences. I think someone could consistently bite this bullet. Actually, my whole point (which is still quite entangled, I admit, and I thank your comment for exposing this) is that we often mix types of reasoning connected to a subjective / contextual / (philosophically) relativistic notion of time (i.e., “Sarah in the present” vs. “Sarah in the future”) with some sort of (quasi-)objective / t-series notion (“Sarah in t”), something like the “point of view of the universe” or “the point of view of humanity.” (Again, thanks to Gavin for directing my attention to this.) When we specify what point of view we are doing the evaluation from, most conundrums seem to disappear… except the next one.
I’m very interested in reading more about this.
Of course, this is a real theoretical problem. However, I guess discounting because of uncertainty (and the possibility of extinction, etc.) might be enough to avoid it—as Nicholas Stern proposes. But I really get lost when we start talking about infinities.
I’m not sure I completely followed #1, but maybe this will answer what you are getting at.
I agree that the following argument is valid:
Either the time discounting rate is 0, or it is morally preferable to use your money/resources to produce utility now rather than to freeze yourself and produce utility later.
However, I still don’t think you can argue that, because I regard time discounting as irrelevant to what I selfishly prefer, I must also believe that discounting shouldn’t be applied when evaluating what is morally preferable. And I think this substantially reduces just how compelling the point is. I mean, I do lots of things I’m aware are morally non-optimal. I probably should donate more of my earnings to EA causes, etc., but sometimes I choose to be selfish, and when I consider cryonics it’s entirely as a selfish choice (I agree that even without discounting it’s a waste in utilitarian terms).
(Note that I’d make a distinction between saying something is not morally optimal and saying it is bad or blameworthy to do, but that’s getting a bit into the weeds.)
---
Regarding the theoretical problems: I agree they aren’t enough of a reason to accept a pure discounting rate. Indeed, I’d go further and say that one is making a mistake to infer things about what’s morally good from the fact that we’d like our notion of morality to have certain nice properties. We don’t get to assume that morality is going to behave the way we would like it to… we’ve just got to do our best with the means of inference we have.
You might think it’s reasonable to discount based on psychological similarity: something is less valuable to your later self the less like you that person is. Cf. the Time-Relative Interest Account of the badness of death (e.g., Holtug 2011). This wouldn’t justify a pure time preference, but it would justify a contingent time preference: in reality, you value stuff less the further in the future it happens, not because of time per se, but because of reduced psychological connectedness, which just so happens to occur over time.
I point this out to show that someone could accept your reductio but get much the same practical result by other means.
Of course, someone who took this view would agree that some harm of size S that befalls you just before you enter the cryo chamber would be just as bad as one that befalls you as soon as you get out.
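A minimal sketch of that last point (my notation, just to make it explicit): weight a harm of size S occurring at time t by your psychological connectedness c(now, t) ∈ [0, 1] rather than by t itself:

disvalue(S, t) = c(now, t) · S

Since successful cryopreservation would leave your psychology essentially unchanged between freezing and thawing, c(now, pre-freeze) ≈ c(now, post-thaw), so the two harms weigh the same despite the calendar time between them.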