Longtermism is defined as holding that “what most matters about our actions is their very long term effects”. What does this mean, formally? Below I set up a model of a social planner maximizing social welfare over all generations. With this model, we can give a precise definition of longtermism.
A model of a longtermist social planner
Consider an infinitely-lived representative agent with population size $N_t$. In each period there is a risk of extinction, governed by the extinction rate $\delta_t$.
The basic idea is that economic growth is a double-edged sword: it increases our wealth, but also increases the risk of extinction. In particular, ‘consumption research’ develops new technologies $A_t$, and these technologies increase both consumption and extinction risk.
Here are the production functions for consumption and consumption technologies:
$c_t = A_t^{\alpha} \cdot (\text{number of consumption workers})$
$\frac{dA_t}{dt} = A_t^{\phi} \cdot (\text{number of consumption scientists})$
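As a quick numerical sketch of these two production functions, with made-up values for $\alpha$, $\phi$, the technology stock, and the worker counts (none of which the model pins down):

```python
# Illustrative sketch of the consumption production functions.
# All values below are assumptions for demonstration only.
alpha, phi = 0.6, 0.5            # hypothetical output and research elasticities
A = 2.0                          # current stock of consumption technologies A_t
consumption_workers = 80.0
consumption_scientists = 20.0

c = A**alpha * consumption_workers        # consumption c_t
dA_dt = A**phi * consumption_scientists   # technology growth dA_t/dt
```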
However, we can also develop safety technologies to reduce extinction risk. Safety research produces new safety technologies $B_t$, which are used to produce ‘safety goods’ $h_t$.
Specifically,
$h_t = B_t^{\alpha} \cdot (\text{number of safety workers})$
$\frac{dB_t}{dt} = B_t^{\phi} \cdot (\text{number of safety scientists})$
The extinction rate is $\delta_t = h_t^{-\beta} A_t^{\eta}$, where the number $A_t$ of consumption technologies directly increases risk, and the quantity $h_t$ of safety goods directly reduces it.
Let $M_t = \exp\left(-\int_0^t \delta_s \, ds\right) = P(\text{being alive at time } t)$.
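A minimal numerical sketch of the hazard and survival probability, where the parameter values and the paths for $A_t$ and $h_t$ are illustrative assumptions:

```python
import numpy as np

# Hazard delta_t = h_t**(-beta) * A_t**eta and survival probability
# M_t = exp(-integral of delta from 0 to t), computed on a time grid.
# Parameters and paths are assumptions, not calibrated values.
beta, eta = 0.5, 0.3
t = np.linspace(0.0, 100.0, 1001)
A_path = np.exp(0.02 * t)        # assumed growth of consumption technologies
h_path = 1.0 + 0.05 * t          # assumed path of safety goods

delta = h_path**(-beta) * A_path**eta   # extinction hazard at each time
# cumulative trapezoid integral of the hazard, then M_t = exp(-integral)
cum_hazard = np.concatenate(([0.0], np.cumsum(0.5 * (delta[1:] + delta[:-1]) * np.diff(t))))
M = np.exp(-cum_hazard)          # M[0] = 1; M declines as hazard accumulates
```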
Now we can set up the social planner problem: choose the number of scientists (vs workers), the number of safety scientists (vs consumption scientists), and the number of safety workers (vs consumption workers) to maximize social welfare. That is, the planner is choosing an allocation of workers for all generations:
$L = \{\text{C scientists, C workers, S scientists, S workers}\}_{t=0}^{\infty}$
The social welfare function is:
$U = \int_0^{\infty} M_t N_t u(c_t) e^{-\rho t} \, dt$
The planner maximizes utility over all generations ($t=0$ to $\infty$), weighting by population size $N_t$ and accounting for extinction risk via $M_t$. The optimal allocation $L^*$ is the allocation that maximizes social welfare.
The planner discounts at rate $\rho = \zeta + \gamma g$ (the Ramsey equation), where $\zeta$ is the exogenous extinction risk, $\gamma$ is the coefficient of relative risk aversion (equivalently, the elasticity of marginal utility), and $g$ is the consumption growth rate. (Note that $\rho$ could be time-varying.)
Here there is no pure time preference; the planner values all generations equally. Weighting by population size means that this is a total utilitarian planner.
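The welfare integral above can be approximated numerically on a truncated horizon. In the sketch below, the CRRA utility function and all paths and parameter values are illustrative assumptions, not calibrated to anything:

```python
import numpy as np

# Truncated approximation of U = integral of M_t * N_t * u(c_t) * e^(-rho*t) dt,
# with CRRA utility u(c) = (c^(1-gamma) - 1) / (1 - gamma).
# All paths and parameters below are illustrative assumptions.
gamma, rho = 2.0, 0.02
t = np.linspace(0.0, 200.0, 2001)
N = np.full_like(t, 100.0)       # constant population (assumption)
c = 1.0 + 0.02 * t               # assumed consumption path
M = np.exp(-0.005 * t)           # assumed survival probability

u = (c**(1.0 - gamma) - 1.0) / (1.0 - gamma)   # CRRA utility
integrand = M * N * u * np.exp(-rho * t)
U = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))  # trapezoid rule
```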
Defining longtermism
With the model set up, now we can define longtermism formally. Recall the informal definition that “what most matters about our actions is their very long term effects”. Here are two ways that I think longtermism can be formalized in the model:
(1) The optimal allocation in our generation, $L^*_0$, should be focused on safety work: the majority (or at least a sizeable fraction) of workers should be in safety research or production, and only a minority in consumption research or production. (Or the same for $L^*_t$ at small values of $t$, say $t<5$, to capture that the next few generations need to work on safety.) This is saying that our time has high hingeyness due to existential risks. It’s also saying that safety work is currently uncrowded and tractable.
(2) Small deviations from $L^*_0$ (the optimal allocation in our generation) will produce large decreases in total social welfare $U$, driven by generations at $t>100$ (or some other large number). In other words, our actions today have very large effects on the long-term future. We could plot the cumulative welfare contribution up to time $t$ under $L^*_0$ and some suboptimal alternative $\bar{L}_0$, and show that $U(\bar{L}_0)$ falls short of $U(L^*_0)$ mainly in the tail.
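To illustrate definition (2), one can simulate the model forward under two fixed allocations and compare total welfare. This is only a sketch: all parameter values and initial stocks are assumptions, and for simplicity the same safety share is applied to workers and scientists, even though the planner chooses those margins separately.

```python
import numpy as np

def total_welfare(safety_share, T=300.0, steps=3000):
    """Simulate the model under a constant safety share and return total
    welfare U. All parameters and initial stocks are illustrative."""
    alpha, phi, beta, eta = 0.6, 0.5, 0.5, 0.3   # hypothetical elasticities
    gamma, rho = 2.0, 0.02                        # CRRA curvature, discount rate
    N, workers, scientists = 100.0, 80.0, 20.0    # fixed population split
    dt = T / steps
    A, B, M, U = 1.0, 1.0, 1.0, 0.0
    for i in range(steps):
        s = i * dt
        c = A**alpha * (1.0 - safety_share) * workers       # consumption c_t
        h = B**alpha * safety_share * workers               # safety goods h_t
        delta = max(h, 1e-9)**(-beta) * A**eta              # extinction hazard
        u = (c**(1.0 - gamma) - 1.0) / (1.0 - gamma)        # CRRA utility
        U += M * N * u * np.exp(-rho * s) * dt              # welfare flow
        A += A**phi * (1.0 - safety_share) * scientists * dt  # consumption tech growth
        B += B**phi * safety_share * scientists * dt          # safety tech growth
        M *= np.exp(-delta * dt)                            # survival update
    return U

# Compare a safety-heavy allocation against a consumption-heavy one.
U_heavy_safety = total_welfare(safety_share=0.5)
U_light_safety = total_welfare(safety_share=0.05)
```

Plotting the running value of `U` over time under the two allocations would show where in the horizon the welfare gap opens up.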
While longtermism has an intuitive foundation (being intergenerationally neutral or having zero pure time preference), the commonly-used definition makes strong assumptions about tractability and hingeyness.
This model focuses on extinction risk; another approach would look at trajectory changes.
Also, it might be interesting to incorporate Phil Trammell’s work on optimal timing and giving now vs. giving later. For example, maybe the optimal solution involves the planner saving resources to invest in safety work in the future.
You might be interested in Existential Risk and Growth; my model here is based on the same Jones (2016) paper.