You say that:
I will [...] focus instead on a handful of simple model cases. [...] These models will be very simple. In my opinion, nothing of value is being lost by proceeding in this way.
I agree in the sense that I think your simple models succeed in isolating an important consideration that wouldn’t itself be qualitatively altered by looking at a more complex model.
However, I do think (without implying that this contradicts anything you have said in the OP) that there are other crucial premises for the argument concluding that reducing existential risk is the best strategy for most EAs. I’d like to highlight three, without implying that this list is comprehensive.
One important question is how growth and risk interact. Specifically, it seems that we face existential risks of two different types: (a) ‘exogenous’ risks with the property that their probability per wall-clock time doesn’t depend on what we do (perhaps a freak physics disaster such as vacuum decay); and (b) ‘endogenous’ risks due to our activities (e.g. AI risk). The probability of such endogenous risks might correlate with proxies such as economic growth or technological progress, or more specific kinds of these trends. As an additional complication, the distinction between exogenous and endogenous risks may not be clear-cut, and arguably is itself endogenous to the level of progress—for example, an asteroid strike could be an existential risk today but not for an intergalactic civilization. Regarding growth, we might thus think that we face a tradeoff where faster growth would on one hand reduce risk by allowing us to more quickly reach thresholds that would make us invulnerable to some risks, but on the other hand might exacerbate endogenous risks that increase with the rate of growth. (A crude model for why there might be risks of the latter kind: perhaps ‘wisdom’ increases at fixed linear speed, and perhaps the amount of risk posed by a new technology decreases with wisdom.)
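The crude model at the end of that paragraph can be made concrete with a toy calculation (my own illustration, not from the OP; all parameter values are arbitrary assumptions). Faster growth shortens the window of exposure to a constant exogenous hazard, but raises an endogenous hazard if technology outpaces linearly growing "wisdom":

```python
import math

def total_survival_prob(g, exo_hazard=0.001, threshold=100.0,
                        wisdom_rate=1.0, k=0.01):
    """P(survival) under a crude two-hazard model (illustrative numbers only).

    g           : growth rate of technology/economy (progress units per year)
    exo_hazard  : constant exogenous hazard rate per year
    threshold   : progress level at which the exogenous risk no longer applies
    wisdom_rate : linear growth rate of 'wisdom'
    k           : scale of the endogenous hazard, which rises with g / wisdom
    """
    T = threshold / g  # years of exposure to the exogenous risk
    # Exogenous part: constant hazard over the exposure window [0, T].
    exo_survival = math.exp(-exo_hazard * T)
    # Endogenous hazard at time t: k*g / (wisdom_rate*t + 1).
    # Integrating over [0, T] gives (k*g/wisdom_rate) * ln(wisdom_rate*T + 1).
    endo_cum = (k * g / wisdom_rate) * math.log(wisdom_rate * T + 1)
    endo_survival = math.exp(-endo_cum)
    return exo_survival * endo_survival

# Faster growth trades lower exogenous risk for higher endogenous risk,
# so survival probability is not monotone in g:
for g in [0.5, 1.0, 2.0, 5.0]:
    print(g, round(total_survival_prob(g), 4))
```

Under these made-up parameters there is an interior optimum growth rate: survival probability rises and then falls as g increases, which is the tradeoff the paragraph describes.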
I think “received wisdom” is roughly that most risk is endogenous, and that more fine-grained differential intellectual or technological progress aimed specifically at reducing such endogenous risk (e.g. working on AI safety rather than generically increasing technological progress) is therefore higher-value than shortening the window of time during which we’re exposed to some exogenous risks.
See, for example, Paul Christiano, On Progress and Prosperity.
A somewhat different lens is to ask how growth will affect the willingness of impatient actors (i.e., those who discount future resources at a higher rate than longtermists) to spend resources on existential risk reduction. This is part of what Leopold Aschenbrenner has examined in his paper on Existential Risk and Economic Growth.
More generally, the value of existential risk reduction today depends on the distribution of existential risk over time, including into the very long-run future, and on whether today’s efforts would have permanent effects on that distribution. This distribution might in turn depend on the rate of growth, e.g. for the reasons mentioned in the previous point. For an excellent discussion, see Tom Sittler’s paper on The expected value of the long-term future. In particular, the standard argument for existential risk reduction requires the assumption that we will eventually reach a state with much lower total risk than today.
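That last assumption can be illustrated with a minimal expected-value sketch (my own numbers, loosely in the spirit of models like Sittler’s, not taken from his paper). If per-period risk stays high forever, the expected future is short, and surviving any one period buys little; if risk later falls near zero, surviving the current risky era is enormously valuable:

```python
def expected_future_periods(risks):
    """Expected number of periods survived, given per-period extinction risks."""
    p_alive = 1.0
    total = 0.0
    for r in risks:
        p_alive *= (1 - r)  # survive this period
        total += p_alive    # probability-weighted count of periods reached
    return total

HORIZON = 10_000
# Scenario A: 1% extinction risk every period, forever.
high_forever = [0.01] * HORIZON
# Scenario B: a 100-period 'time of perils', then risk drops to near zero.
time_of_perils = [0.01] * 100 + [0.00001] * (HORIZON - 100)

print(expected_future_periods(high_forever))    # roughly 99 periods
print(expected_future_periods(time_of_perils))  # thousands of periods
```

In scenario A the expected future is capped at about 1/risk ≈ 100 periods no matter what we do today, while in scenario B it is orders of magnitude longer, which is why the value of present risk reduction hinges on eventually reaching a low-risk state.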
A somewhat related issue is the distribution over time of opportunities to improve the long-term future. Specifically, will there be more efficient longtermist interventions in, say, 50 years? If so, this would be another reason to favor growth over reducing risk now; though more precisely it would favor growth not of the economy as a whole but of the pool of resources dedicated to improving the long-term future, for example through ‘EA community building’ or investing to give later. Relatedly, the observation that longtermists are unusually patient (i.e., they discount future resources at a lower rate) is both a reason to invest now and give later, when longtermists control a larger share of the pie, and a consideration increasing the value of “ensuring that the future proceeds without disruptions”, potentially by spending resources now to reduce existential risk. For more, see e.g.:
Toby Ord, The timing of labour aimed at reducing existential risk
Owen Cotton-Barratt, Allocating risk mitigation across time
Will MacAskill, Are we living at the most influential time in history?
Phil Trammell, Philanthropic timing and the Hinge of History
You’re right that these are indeed important considerations that I swept under the rug… Thanks again for all the references.