In Defence of Temporal Discounting in Longtermist Ethics

Introduction

In this post I argue that while we should not discount the interests of future people when judging outcomes, a temporal discount rate is nonetheless appropriate in consequentialist decision procedures.

Two Senses of Normative Ethics

There are two basic senses of a normative moral system:
1. Criterion of judgment: “what is right/​good; wrong/​evil?”
2. Decision procedure: “how should I act? What actions should I take?”

We can — and I argue we should — distinguish between these two senses.


Consider consequentialism:

Criterion of Judgment

Using consequentialism as a criterion of judgment, we can evaluate the actual ex post consequences of actions (perhaps over a given timeframe: e.g. to date, or all of time [if you assume omniscience]) to decide whether an action was right/​wrong.

Decision Procedure

However, when deciding what action to take, we cannot know what the actual consequences will be.

For a decision procedure to be useful at all — for it to even qualify as a decision procedure — it must be actionable: it must be possible, not only in principle but also in practice, to act in accordance with it. The decision procedure must therefore be:

  • Directly evaluable or

  • Approximately evaluable or

  • Robustly estimable or

  • Etc.

We should have a way of determining what course of action the procedure actually recommends in a given scenario. As such, the procedure must be something we can evaluate/​approximate/​estimate ex ante, before we know the actual consequences of our actions.

Because of coherence arguments, I propose that a sensible decision procedure for consequentialists is the “ex ante expected consequences[1] of (policies over) actions”[2].
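A minimal sketch of what this decision procedure could look like in code, under the simplifying assumption of a known outcome space and credences over outcomes (all the names and inputs here are hypothetical placeholders, not a claim about how anyone actually computes this):

```python
def best_action(actions, outcomes, probability, value):
    """Pick the action maximising ex ante expected value.

    A toy rendering of the consequentialist decision procedure proposed
    above: `probability(a, o)` is the agent's credence that action `a`
    leads to outcome `o`, and `value(o)` is the moral value of `o`.
    Both are illustrative placeholders.
    """
    def expected_value(action):
        return sum(probability(action, o) * value(o) for o in outcomes)

    return max(actions, key=expected_value)
```

In practice (per footnote 1) the outcome space is vastly too large to enumerate, so any real procedure would have to be a bounded, best-effort approximation of this.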


Against Discounting

When considering how we should value the long-term future, MacAskill and Ord seem to me to conflate these two senses, or at least to elide the distinction between them[3].

They make a compelling argument that we shouldn’t discount the interests (wellbeing/​preferences) of future people:

If you assign a fixed discount rate per year (e.g. 1%) to the interests of future people and extrapolate back in time, you reach conclusions like: the interests of Ancient Egyptian royalty (e.g. Cleopatra) outweigh the interests of everyone alive today. That seems obviously wrong.
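To make the arithmetic concrete, here is a quick sketch, assuming purely for illustration a 1% annual rate and roughly 2,050 years since Cleopatra's reign:

```python
# Toy calculation: how much extra weight a fixed 1% annual discount
# rate, extrapolated backwards, gives to someone ~2,050 years ago.
ANNUAL_RATE = 0.01            # illustrative 1% per year
YEARS_SINCE_CLEOPATRA = 2050  # rough figure, for illustration only

weight_multiplier = (1 + ANNUAL_RATE) ** YEARS_SINCE_CLEOPATRA
print(f"Cleopatra's interests weighted {weight_multiplier:.2e}x ours")
# A 1% rate implies her interests count hundreds of millions of times
# more than those of a person alive today -- the implausible
# conclusion described above.
```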

Temporal Discounting and the Two Senses of Normative Ethics

I agree that from a perspective of “morality as criterion of judgment”, we should not discount the interests of future people. Plausibly, in a consequentialist-criterion-of-judgment-framework almost all the value of any action is determined by its impact on far future people.


However, it does not follow that in a consequentialist-decision-procedure-framework almost all the value of any action is determined by its impact on far future people.

Rather, it seems to me that this is quite unlikely to be the case.


It is difficult for us to evaluate the effects of our actions on future people (Which future people will counterfactually exist, depending on which actions we take? What are their interests? How will our actions affect those interests? Etc.), and the further out they are, the greater our uncertainty about those effects.

Alan Hájek makes the case for the difficulty of objective consequentialism as a decision procedure in his interview with Robert Wiblin.

To a first approximation[4], the uncertainty of the aggregate moral value of a particular action grows exponentially the further out in time we consider in our evaluation window[5].
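One toy way to see why uncertainty compounds like this (an illustrative model, not a claim about the true dynamics): if the downstream effect of an action passes through an independent noise factor each year, the spread of our value estimates grows super-linearly with the horizon.

```python
import random
import statistics

random.seed(0)

def value_spread(years, trials=5000, yearly_noise=0.1):
    """Std dev of a toy value estimate whose effect is multiplied by
    an independent lognormal noise factor each year (illustrative)."""
    outcomes = []
    for _ in range(trials):
        v = 1.0
        for _ in range(years):
            v *= random.lognormvariate(0, yearly_noise)
        outcomes.append(v)
    return statistics.stdev(outcomes)

# The spread of estimates over a 50-year horizon dwarfs the spread
# over a 5-year horizon, even though each year's noise is identical.
```

Under this toy model the log of the outcome performs a random walk, so uncertainty about the aggregate value compounds with the evaluation window rather than staying fixed.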


As such, I think a temporal discount rate does make sense within a consequentialist-decision-procedure-framework, at least if you agree that the relevant consideration when deciding what action to take is “ex ante expected consequences”.


The conclusion of the above is that while Cleopatra’s interests in her time do not in actuality outweigh the interests of everyone alive today, Cleopatra nonetheless should not have considered the people alive today in her moral decision making.


And I endorse that conclusion? (If it’s a bullet, then it must be the easiest bullet I’ve ever bitten.) I don’t think Cleopatra could have usefully evaluated the consequences of her actions on people alive today. The current global geopolitical macrostate is probably something Cleopatra’s accessible world models could not have readily conceived of.

In her decision making, Cleopatra should have considered the interests of her direct subjects and of those in the near future; they are the only people about whom she could usefully reason.

To summarise this argument in a less nuanced but more memetically fit form:

We should care about the interests of our children and grandchildren, and leave the interests of our great-grandchildren to our grandchildren; they are better positioned to evaluate and act upon those interests.

Conclusions/​Summary

  • We shouldn’t discount the interests of future people within a consequentialist-criterion-of-judgment-framework

    • Actual, existent people don’t intrinsically matter any more or less based on their temporal location

    • Cleopatra’s interests do not outweigh the interests of the eight billion people alive today

  • We should discount the interests of future people within a consequentialist-decision-procedure-framework

    • Due to our uncertainty about:

      • The counterfactual existence of such people

      • Their interests

      • The effects of our actions upon them

      • Etc.

    • This uncertainty grows (exponentially? hyperbolically?) the more distant they are from us in time
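The saturating discount described in footnote 4 could be modelled, very roughly, like this (the 5% rate and the uncertainty floor are illustrative assumptions, not proposed values):

```python
import math

def uncertainty_discount(years, rate=0.05, floor=1e-6):
    """Toy decision-procedure discount: weight decays exponentially
    as uncertainty compounds, then flattens once uncertainty
    saturates at maximum entropy (modelled here as a hard floor)."""
    return max(math.exp(-rate * years), floor)

# Near-term people get close to full weight; beyond the horizon where
# exp(-rate * years) hits the floor, everyone receives the same small
# constant weight -- no further discounting by temporal displacement.
```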

  1. ^

    Subject to bounded computing constraints. Evaluating the full ex ante expected consequences may be computationally intractable. So some sort of “best effort” estimate of said consequences may be needed.

  2. ^

    More sophisticated consequentialists may want higher order abstractions in their decision procedures (policies for selecting policies for … for selecting actions).

  3. ^

Note: I’m going off my vague recollections of their writings, so this may be somewhat inaccurate.

  4. ^

There is a maximum level of uncertainty (maximum entropy), so beyond a certain time horizon we have roughly constant (maximum) uncertainty about future people, and we wouldn’t further discount people who come into existence after that horizon based on temporal displacement.

    As such, a proper temporal discount due to uncertainty may not be exponential, but perhaps hyperbolic or similar.

  5. ^

I suspect the growth rate of said uncertainty is significantly higher than 1% per year (at least before the maximum-entropy time horizon).

Crossposted to LessWrong