Yes, I’m saying that it happens to be the case that, in practice, fanatical tradeoffs never come up.
Furthermore, you’d have to assign p=0 when V=∞, which means perfect certainty in an empirical claim, which seems wrong.
Hm, doesn’t claiming V=∞ also require perfect certainty? Ie, to know that V is literally infinite rather than some large number.
What is n? It seems all the work is being done by having n in the exponent.
How about this: fanaticism is fine in principle, but in practice we never face any actual fanatical choices. For any actions with extremely large value V, we estimate p < 1/V, so that the expected value is <1, and we ignore these actions based on standard EV reasoning.
in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.
Is this not laughable? How could anyone think that “looking at the 1000+ year effects of an action” is workable?
What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker’s behavior differ under some uncertainty compared to no uncertainty?
Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B, with endowments eA = 5 (with probability 1) and eB = 0 with probability p, 10 with probability 1−p. So B either gets nothing or twice as much as A.
We choose a transfer T to solve:

max_T  u(5−T) + p·u(0+T) + (1−p)·u(10+T)   s.t.  0 ≤ T ≤ 5
For a baseline, consider p=0.5 and u=ln. Then we get an optimal transfer of T∗≈1.8. Intuitively, as p→0, T∗→0 (if B gets 10 for sure, don’t make any transfer from A to B), and as p→1, T∗→2.5 (if B gets 0 for sure, split A’s endowment equally).
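As a sanity check on these numbers, here's a minimal numerical sketch (assuming u = ln and using scipy's bounded scalar optimizer; an interior optimum exists since ln(T) → −∞ as T → 0):

```python
# Baseline social planner problem: p = 0.5, u = ln.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_welfare(T, p=0.5):
    """Negative expected social utility of transfer T (negated for minimization)."""
    return -(np.log(5 - T) + p * np.log(T) + (1 - p) * np.log(10 + T))

# Bounds shrunk slightly from [0, 5] to avoid log(0) at the endpoints.
res = minimize_scalar(neg_welfare, bounds=(1e-9, 5 - 1e-9), method="bounded")
print(round(res.x, 2))  # prints 1.83
```

For comparison, the first-order condition reduces to 2T² + 10T − 25 = 0, giving T∗ = (√300 − 10)/4 ≈ 1.83.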
So that’s a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we’re uncertain about the value of p?
Suppose we think p∼F, for some distribution F over [0,1]. If we maximize expected utility, the problem becomes:

max_T  E[u(5−T) + p·u(0+T) + (1−p)·u(10+T)]   s.t.  0 ≤ T ≤ 5
Since the objective function is linear in probabilities, we end up with the same problem as before, except with E[p] instead of p. If we know the mean of F, we plug it in and solve as before.
So it turns out that this form of uncertainty doesn’t change the problem very much.
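To see the linearity point concretely, here's a quick Monte Carlo check (a sketch; F = Uniform[0,1] is assumed purely for illustration, so E[p] = 0.5):

```python
# The objective is linear in p, so averaging over draws of p ~ F
# gives the same optimal transfer as plugging in E[p] directly.
import numpy as np
from scipy.optimize import minimize_scalar

def welfare(T, p):
    return np.log(5 - T) + p * np.log(T) + (1 - p) * np.log(10 + T)

rng = np.random.default_rng(0)
p_draws = rng.uniform(0, 1, size=100_000)  # p ~ F = Uniform[0,1]

# Maximize the Monte Carlo average of the objective over draws of p.
res_mc = minimize_scalar(lambda T: -np.mean(welfare(T, p_draws)),
                         bounds=(1e-9, 5 - 1e-9), method="bounded")
# Maximize the objective at the mean probability E[p] = 0.5.
res_mean = minimize_scalar(lambda T: -welfare(T, 0.5),
                           bounds=(1e-9, 5 - 1e-9), method="bounded")
print(round(res_mc.x, 2), round(res_mean.x, 2))  # the two optima coincide (≈1.83)
```

The same argument goes through for any F with the same mean, which is why only E[p] matters here.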
Questions:

- If we don’t know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
- What if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?
- How does a stochastic dominance decision theory work here?
Do you think Will’s three criteria are inconsistent with the informal definition I used in the OP (“what most matters about our actions is their very long term effects”)?
In my setup, I could say ∫_{t=0}^∞ M_t N_t u(c_t) e^{−ρt} dt ≈ ∫_{t=T}^∞ M_t N_t u(c_t) e^{−ρt} dt for some large T; ie, generations 0 to T−1 contribute basically nothing to total social utility. But I don’t think this captures longtermism, because this is consistent with the social planner allocating no resources to safety work (and all resources to consumption of the current generation); the condition puts no constraints on L∗. In other words, this condition only matches the first of three criteria that Will lists:
(i) Those who live at future times matter just as much, morally, as those who live today;
(ii) Society currently privileges those who live today above those who will live in the future; and
(iii) We should take action to rectify that, and help ensure the long-run future goes well.
I’m a bit skeptical about the value of formal modelling here. The parameter estimates would be almost entirely determined by your assumptions, and I’d expect the confidence intervals to be massive.
I think a toy model would be helpful for framing the issue, but going beyond that (to structural estimation) seems not worth it.
and also a world where shorttermism is true
On Will’s definition, longtermism and shorttermism are mutually exclusive.
Suppose you’re taking a one-off action d∈D, and then you get (discounted) reward r1(d),r2(d),…
I’m a bit confused by this setup. Do you mean that d is analogous to L0, the allocation for t=0? If so, what are you assuming about Lt, for t>0? In my setup, I can compare U(L̄_0, L∗_1, L∗_2, ...) to U(L∗_0, L∗_1, L∗_2, ...), so we’re comparing against the optimal allocation, holding fixed L∗_t for t>0.
∀d ∈ D: ∑_{t=0}^∞ r_t(d) ≈ ∑_{t=t′}^∞ r_t(d), where t′ is some large number.
I’m not sure this works. Consider: this condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there’s no need for present generations to care about improving the future. (Or are you defining rt(d) as the marginal utility of d on generation t, as opposed to the utility level of generation t under d?)
My model here is riffing on Jones (2016); you might look there for solving the model.
Re infinite utility, Jones does say (fn 6): “As usual, ρ must be sufficiently large given growth so that utility is finite.”
Assumption Based Planning – writing down an organization’s plans, identifying the load-bearing assumptions, and assessing the vulnerability of the plan to each assumption.
Exploratory Modeling – rather than trying to model all available data to predict the most likely outcome, these models map out a wide range of assumptions and show how different assumptions lead to different consequences.
Scenario planning – identifying the critical uncertainties, developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust to all options.
Can you clarify how these tools are distinct? My (ignorant) first impression is that they just boil down to “use critical thinking”.
Re algebra, are you defending the numbers you gave as reasonable? Otherwise, if we’re just making up numbers, might as well do the general case.
Would ‘countercyclical altruism’ also capture this view?
I think this would be easier to explain with a two-sector model: ie, just H and F. Also, would it be easier to just work with algebra? Ie, H=[−a,b]×[−c,d].
Assuming a budget of 6 units
How does this fit with H+4F+5W? That’s 10 units, no?
I will assume, for simplicity, constant marginal cost-effectiveness across each domain/effect/worldview
It’s worth emphasizing that this assumption rules out the diminishing-returns case for diversifying; this is a feature, since we want to isolate the uncertainty case for diversifying.
One version of the phase change model that I think is worth highlighting: S-curve growth.
Basically, the set of transformative innovations is finite, and we discovered most of them over the past 200 years. Hence, the Industrial Revolution was a period of fast technological growth, but that growth will end as we run out of innovations. The hockey-stick graph will level out and become an S-curve, as g→0.
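For concreteness, a toy version of this story (illustrative parameters, not a calibration): model the stock of discovered innovations I_t as logistic, so the growth rate g = ΔI/I falls to zero as the finite pool I_max is exhausted.

```python
# Logistic discovery of a finite pool of innovations: growth is fast while
# most of the pool remains, then g -> 0 as the pool is exhausted.
def simulate_s_curve(I0=0.01, I_max=1.0, k=0.5, steps=60):
    I = I0
    growth_rates = []
    for _ in range(steps):
        dI = k * I * (1 - I / I_max)  # discovery rate: proportional to current knowledge and to what's left
        growth_rates.append(dI / I)   # growth rate g_t = dI/I
        I += dI
    return growth_rates

g = simulate_s_curve()
print(round(g[0], 3), round(g[-1], 6))  # g starts near k, then declines toward 0
```

In this model g is monotonically declining, so it reproduces the "growth ends" part of the story but not the IR acceleration; capturing both would need g to rise first (eg, an initial phase where discovery capacity itself is growing).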
Although, is it the case that growth(GDP) increased during the modern era (ie, growth(population) has been rising)? My recollection is that the IR was a structural break, with g jumping from 0.5% to 2% (or something).
Right, growth(GDP) > growth(GDP per capita) when growth(population)>0.
while the author agrees that growth rates have been increasing in the modern era (roughly, the Industrial Revolution and everything after)
I think this is a misunderstanding. The common view is that the growth rate has been constant in the modern era.