Assumptions about the far future and cause priority
Abstract. This article examines the position that cause areas related to existential risk reduction, such as AI safety, should be virtually infinitely preferred to other cause areas such as global poverty. I will explore some arguments for and against this position. My first goal is to raise greater awareness of the crucial importance of a particular assumption concerning the far future, which rules out the possibility of long-term exponential growth of our utility function. I will also discuss the classical decision rule based on the maximization of expected values. My second goal is to question this assumption and this decision rule. In particular, I wonder whether exponential growth could be sustained through the exploration of increasingly complex patterns of matter; and whether, when attempting to maximize the expected values of different actions, we might forget to take into account possibly large costs caused by later updates of our beliefs about the far future. While I consider the ideas presented here to be highly speculative, my hope is to elicit a more thorough analysis of the arguments underlying the case for existential risk reduction.
A fictitious conversation on cause priority
The considerations below could be put into complicated-looking mathematical models involving integrals and probability measures. I will not follow this path, and will focus instead on a handful of simple model cases. For convenience of exposition, there will be fictitious characters, each of whom holds one of these models as their belief about the far future. These models will be very simple. In my opinion, nothing of value is lost by proceeding in this way.
The characters in the fictitious story have a fantastic opportunity to do good: they are about to spend 0.1% of the world GDP on whatever they want. They will debate what to do in light of their beliefs on the far future. They agree on many things already: they are at least vaguely utilitarian and concerned about the far future; they believe that in exactly 100 years from now (unless they intervene), there will be a 10% chance that all sentient life suddenly goes extinct (and otherwise everything goes on just fine); outside of this event they believe that there will be no such existential risk; finally, they believe that sentient life must come to an end in 100 billion years. Also, we take it that all these beliefs are actually correct.
The characters in the story hesitate between a “growth” intervention, which they estimate would instantaneously raise their utility function by 1%,[1] and an “existential” intervention, which would reduce the probability of extinction they will face in 100 years to 9.9% instead of 10%.[2]
Alice believes that unless extinction occurs, our utility function always grows at a roughly constant rate, until everything stops in 100 billion years. She calculates that in her model, the growth intervention moves the utility function upwards by 1% at any point in the future. In particular, the expectation of the total utility created in the future increases by 1% if she chooses the growth intervention. With the existential intervention, she calculates that (up to a minuscule error) this expectation moves up by 0.1%. Since 1% > 0.1%, she argues for the growth intervention.
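To make Alice's comparison concrete, here is a minimal sketch in Python (the numbers are those of the story; the framing, in which expected total utility factorizes into a survival probability times the utility of the surviving trajectory, is my own simplification, justified because almost all of the utility lies far beyond the 100-year mark):

```python
# Toy check of Alice's comparison (numbers from the story, framing is mine).
# Under constant exponential growth, essentially all utility lies far beyond
# the 100-year risk, so expected total utility factorizes as
# (survival probability) x (total utility of the surviving trajectory).

p_extinction = 0.10    # baseline probability of extinction in 100 years
p_reduced = 0.099      # after the existential intervention

# Growth intervention: the whole utility curve is shifted up by 1%.
gain_growth = 1.01 - 1.0

# Existential intervention: the survival factor goes from 0.900 to 0.901.
gain_existential = (1 - p_reduced) / (1 - p_extinction) - 1

print(f"growth intervention:      +{gain_growth:.2%}")       # +1.00%
print(f"existential intervention: +{gain_existential:.2%}")  # about +0.11%
```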
Bob’s view on the far future is different. He believes that the growth rate of our utility function will first accelerate, as we discover more and more technologies. In fact, it will accelerate so fast that we will quickly have discovered all discoverable technologies. We will similarly quickly figure out the best arrangement of matter to maximize our utility function locally, and all that will be left to do is colonize space and fill it with this optimal arrangement of matter. However, we are bound by the laws of physics and cannot colonize space faster than the speed of light. This implies that in the long run, our utility function cannot grow faster than t^3 (where t is time). The growth rate of this function[3] decays to zero quickly, like 1/t. So in effect, we may as well suppose that our utility function will spike up quickly, and then plateau at a value that can essentially be regarded as a constant[4]. For the existential intervention, he finds that the expected utility of the future increases by about 0.1%, in agreement with Alice’s assessment. However, he reaches a very different conclusion when evaluating the growth intervention. Indeed, in his model, the growth intervention only improves the fate of the future before the onset of the plateau, and brings this onset a bit closer to the present. In particular, it has essentially no effect on the utility function after the plateau is reached. But this is where the vast majority of the future resides. So the growth intervention will barely budge the total utility of the future. He therefore argues for the existential intervention.
Clara holds a more sophisticated model of the far future than both Alice and Bob. She acknowledges that we cannot be certain about our predictions. She holds that there is a range of different possible scenarios for the far future, to which she assigns certain probabilities. In fact, she puts a weight of 50% on Alice's model, and a weight of 50% on Bob's model. Her calculations depend crucially on comparing the expected value of the total utility of the future under each model. She considers that the growth in utility in Alice's model is so much slower than in Bob's that the plateau appearing in Bob's model is never within sight in Alice's model. She thus concludes that the far future has much greater utility in Bob's model; or else, she reasons that Alice must have failed to properly take into account the slowdown appearing in Bob's model. In either case, she joins Bob in arguing for the existential intervention.
In the next sections, we dig deeper into some of the arguments appearing in the preceding discussion.
A closer look at Bob’s arguments (no exponential growth)
As far as I can tell, some version of Bob’s view that our utility function ultimately reaches a plateau (or grows no faster than t^3) is the more typical view among EA people who have thought about the problem.[5] I will focus now on the examination of this point.
This view relies on the assumption that we[6] will quickly discover the essentially optimal way to organize matter in the region of space that we occupy. Once this is done, all that is left to do is to expand in space and reproduce this optimal arrangement of matter over and over again.
It seems extremely unlikely to me that we will come remotely close to discovering the utility-maximizing pattern of matter that can be formed even just here on Earth. There are about 10^50 atoms on Earth. In how many different ways could these atoms be organized in space? To keep things simple, suppose that we just want to delete some of them to form a more harmonious pattern, and otherwise do not move anything. Then there are already 2^(10^50) possible patterns for us to explore.
The number of atoms on our planet is already so large that it is many orders of magnitude beyond our intuitive grasp (in comparison, 100 billion years almost feels like you can touch it). So I’m not sure what to say to give a sense of scale for 2^(10^50); but let me give it a try. We can write down the number of atoms on Earth as a one followed by 50 zeros. If we try to write down 2^(10^50) similarly, then we would basically have to write a one followed by as many zeros as a third of the number of atoms on Earth.
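As a quick sanity check of this claim (a one-line computation of my own, not part of the original argument): the number of decimal digits of 2^(10^50) is about 10^50 × log10(2), which is roughly 3 × 10^49, indeed about a third of the number of atoms on Earth.

```python
import math

# Digit count of 2**(10**50): about 10**50 * log10(2) decimal digits.
digits = 1e50 * math.log10(2)
print(f"{digits:.2e}")  # roughly 3.01e+49, i.e. about a third of 10**50
```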
Let me also stress that 2^(10^50) is in fact a very pessimistic lower bound on the number of different patterns that we can explore. Atoms are not all the same. They are made up of smaller parts that we can split up and play with separately. We are not restricted to using only the atoms on Earth, and we can move them further distances away. Also, I do not see why the optimizer of our utility function should be constant in time.[7] In comparison to the potential number of patterns accessible to us, a time scale of 100 billion years is really, really, REALLY ridiculously short.
In order to rescue Bob's argument, it seems necessary to make the case that, although the space of possible patterns is indeed huge, exploring this space has only a very limited impact on the growth of our utility function. I find it very difficult to decide whether this is true or not. One has to answer questions such as: (1) How rapidly are we capable of exploring this space of patterns? (2) Should we expect our speed of exploration to increase over time? If so, by how much? (3) How does our utility function increase as we keep improving the quality of the patterns we discover?
I do not know how to answer these questions. But perhaps it will help to broaden our imagination if I suggest a simple mental image of what it could look like for a civilization to be mostly busy trying to explore the space of patterns available to them. Possibly, our future selves will find that the greatest good will be achieved by preparing for and then realizing a coordinated dance performance of cosmic dimension, spanning a region greater than that of the solar system and lasting millions of years. While they will not completely disregard space colonization, they will find greater value in optimizing over the choreography of their cosmic dance, preparing for the success of their performance, and then realizing it.[8] Generalizing on what I aim to capture with this example, I find it plausible that highly advanced sentient beings will be very interested in extremely refined and intricate doings comparable to art forms, which we cannot even begin to imagine, but which will score particularly highly for their utility function.
A somewhat Bayesian objection to the idea I am defending here could be: if indeed Bob’s view is invalid, then how come the point defended here has not already become more commonplace within the EA community? This is more tangential and speculative, so I will push a tentative answer to this question into a long footnote.[9]
A closer look at Alice’s point of view (exponential growth)
Aside from Bob's and Clara's objections, another type of argument can be raised against Alice's view: somewhat implicitly, it may conflate the utility function with something that at least vaguely looks like the world GDP; and in truth, if there were a simple relationship between the utility function and the world GDP, it would more plausibly be that our utility function is the logarithm of the world GDP.
This argument would put Alice’s belief that our utility function can grow at a steady rate over long periods of time into very serious doubt. Under a default scenario where GDP growth is constant, it would mean that our utility function only grows by some fixed amount per unit of time.
It is difficult to argue about the relationship between a state of the world and what the value of our utility function should be. I will only point out that the argument is very sensitive to the precise functional relation we postulate between our utility function and world GDP. Indeed, if we decide that our utility function is some (possibly small) power of the world GDP, instead of its logarithm, then a steady growth rate of GDP does again imply a steady growth rate of our utility function (as opposed to adding a constant amount per unit of time). If there were a relationship between our utility function and the world GDP, then I do not know how I could go about deciding whether our utility function looks more like log(GDP) or more like, say, (GDP)^0.1. If anything, postulating that our utility function looks like (GDP)^x for some exponent x between 0 and 1 gives us more freedom for adjustment between reality and our model of it. I also feel that it would work better under certain circumstances; for instance, if we duplicate our world and create an identical copy of it, I would find it bizarre if our utility function only increased by a constant amount, and more reasonable if it were multiplied by some factor.[10]
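To illustrate how much hangs on this choice, here is a small sketch (the 3% growth rate and the exponent 0.1 are my own toy numbers) comparing the two functional forms over a century of steady GDP growth:

```python
import math

# Toy comparison (my own numbers): a century of steady 3% GDP growth, viewed
# through U = log(GDP) versus U = GDP**0.1.
g, years = 0.03, 100
ratio = (1 + g) ** years  # GDP is multiplied by about 19.2

print("log model:   utility gains a constant", round(math.log(1 + g), 4),
      "per year, i.e.", round(math.log(ratio), 2), "in total")
print("power model: utility is multiplied by", round(ratio ** 0.1, 3),
      "overall, i.e. it grows at a steady", f"{(1 + g) ** 0.1 - 1:.2%}", "per year")
```

Under the logarithmic model the gains are additive and modest; under the power model the utility function inherits a smaller but still exponential steady growth rate, which is all that Alice's argument needs.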
Finally, I want to point out that Alice's view is not at all sitting at the tail end of some spectrum of possible models. I see no obstacle in the laws of physics to the idea that the growth rate of our utility function will not only remain positive, but will in fact continue to increase without bound for the next 100 billion years. Indeed, the space of possible patterns of matter we can potentially explore between time t and time 2t grows like exp(t^4);[11] and the growth rate of this function goes up to infinity. If one takes the position that the growth rate of our utility function can increase without bound, then one is led to the conclusion that growth interventions are always to be preferred over existential interventions.
A closer look at Clara’s arguments (expectation maximization)
The reasoning that leads Clara to side with Bob's conclusion is, in my opinion, very fragile.[12] Although I do not think that it is necessary to do so, I find it clearest to explain this by supposing that Alice revises her model and says: "In fact I don't know how our utility function will grow in the far future. The only thing I am certain about is that the growth rate of our utility function will always be at least 3% per year (outside of the possibility of extinction and of the interventions we do)." This is a weaker assumption about the future than her original model[13], so it should only increase the weight Clara is willing to put on Alice's model. But with this formulation, whatever Bob's prediction for the future is, Alice could say that maybe Bob is right up until the moment when he predicts a growth rate below 3%, but then she would insist on being more optimistic and keep the growth rate at (or above) 3%. In this way, Alice's updated model is guaranteed to yield higher expected utility than Bob's. Roughly speaking, Clara's procedure essentially amounts to selecting the most optimistic model around.[14]
I suppose that a typical line of defense for the expectation-maximization procedure has something to do with the idea that it is, in some sense, “provably” best; in other words, that there is some mathematical reasoning justifying its superiority. I want to challenge this view here with two counter-arguments.[15]
First, the classical argument for expectation maximization relies on the law of large numbers. This law deals with a series of relatively independent variables which we then sum up. It asserts that, in situations where each term contributes little to the overall sum, the total sum becomes concentrated around the sum of the expected values of each contribution, with comparatively small fluctuations. In such situations, it therefore makes sense to maximize the expected value of each of our actions. But, for all I know, there is only one universe in which we are taking bets on the long-term future.[16] If, say, we never update our bet for the long-term future, then there will be no averaging taking place. In such a circumstance, maximizing expected values seems to me rather arbitrary, and I would see no contradiction if someone decided to optimize for some different quantity.
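A small simulation may help make the contrast vivid (the bet and its payoff numbers are my own, purely for illustration): averaged over many independent bets, outcomes concentrate around the expected value, which is what the law of large numbers guarantees; for a single one-shot bet, nothing of the sort holds.

```python
import random

# Toy illustration (my own numbers): a bet paying 100 with probability 10%,
# and 0 otherwise, so its expected value is 10.
random.seed(0)

def risky_bet():
    return 100.0 if random.random() < 0.10 else 0.0

average = sum(risky_bet() for _ in range(100_000)) / 100_000
one_shot = risky_bet()

print(f"average over 100,000 independent bets: {average:.1f}  (close to 10)")
print(f"a single one-shot bet:                 {one_shot:.1f}  (either 0 or 100)")
```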
My most important objection to Clara's reasoning is that, in my opinion, it fails to take into account certain effects which I will call "switching costs".[17] Although I explored the opposite hypothesis in the previous paragraph, I find it more likely that we will regularly update our prediction about the far future. And, in view of the scale of the uncertainties, I expect that our beliefs about it will actually mostly look like random noise.[18] Finally, I expect that the standard expectation-maximization prescription will be all-or-nothing: only do growth interventions, or only do existential interventions. It seems to me that Clara's calculation is too short-sighted, and fails to take into account the cost associated with revising our opinion in the future. To illustrate this, suppose that Alice, Bob and Clara run a charitable organization called ABCPhil, which invests 0.1% of world GDP each year to do the maximal amount of good. Imagine that for the next 10 years ABCPhil financed only existential interventions; then suddenly switched to financing only growth interventions for the following 10 years; and so on, switching completely every 10 years. Now, compare this with the scenario where ABCPhil finances both equally all the time. While this comparison is not straightforward, I would be inclined to believe that the second scenario is superior. In any case, my point here is that Clara's reasoning, as stated, simply ignores this question, and this may be an important problem.
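To make the worry slightly more tangible, here is a deliberately crude toy model (every number and modelling choice in it is my own assumption, not something argued for above): each year ABCPhil splits a unit budget between the two cause areas, and a fraction of any budget that is moved between areas is lost whenever the allocation changes (community turnover, lost expertise, rebuilding capacity).

```python
# Deliberately crude toy model of switching costs (all numbers are my own
# assumptions). Each year a unit budget is split between growth and existential
# interventions; a fraction of any budget moved between the two areas is wasted.
SWITCH_LOSS = 0.3  # assumed fraction of re-allocated budget lost in a switch year

def productive_spending(shares_to_growth):
    total, previous = 0.0, shares_to_growth[0]
    for share in shares_to_growth:
        moved = abs(share - previous)        # budget shifted between cause areas
        total += 1.0 - SWITCH_LOSS * moved   # the moved fraction is partly wasted
        previous = share
    return total

years = 40
alternating = [1.0 if (year // 10) % 2 == 0 else 0.0 for year in range(years)]
balanced = [0.5] * years

print("switching every 10 years:", productive_spending(alternating))  # 39.1
print("financing both equally:  ", productive_spending(balanced))     # 40.0
```

Under these made-up parameters the balanced allocation comes out ahead, but the only point is that the comparison turns on a quantity, the size of the switching cost, which the expectation-maximization calculation above never asks about.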
Tentative guidelines for concrete decision making
In view of the discussion in the previous section, and in particular of the possible problem of "switching costs", I do not believe that there can be a simple recipe that Clara could just apply to discover the optimal decision to take. (We take Clara's point of view here since she more reasonably acknowledges her uncertainty about her model of the future.) The best that can be done is to indicate a few guidelines for decision making, which then need to be complemented by some amount of "good judgment".
I find it most useful to think in terms of "reference points": a small set of possible decisions, each the result of a certain type of thinking. Once these reference points are identified, "good judgment" can weigh in and bend the final decision more or less toward a given reference point, depending on rough guesses as to the magnitude of the effects that are not captured well under each perspective.
One such reference point is indeed the one resulting from the maximization of expected values (which is what Clara was doing in the original story). A second reference point, which I will call the "hedging" decision rule, is as follows. First, Clara calculates the best action under each model of the future; in fact, Alice and Bob have done this calculation for her already. Then, she aggregates the decisions according to the likelihood she places on each model. In other words, she gives money to Alice and Bob in proportion to how much she believes each is right, and then lets them do what they think is best.[19]
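In code, the hedging rule is almost trivial (a minimal sketch; the function and dictionary names are mine):

```python
# Minimal sketch of the "hedging" decision rule (names are mine): give each
# model's preferred intervention a share of the budget equal to the credence
# placed on that model.
def hedging_allocation(credences, preferred_intervention):
    """credences: {model: probability}; preferred_intervention: {model: action}."""
    allocation = {}
    for model, credence in credences.items():
        action = preferred_intervention[model]
        allocation[action] = allocation.get(action, 0.0) + credence
    return allocation

# Clara's situation: 50% weight on each model.
print(hedging_allocation({"Alice": 0.5, "Bob": 0.5},
                         {"Alice": "growth", "Bob": "existential"}))
# -> {'growth': 0.5, 'existential': 0.5}
```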
I want to stress again that I do not claim the hedging decision rule to be always superior to expectation maximization. However, it is also true that, in circumstances in which we believe that the switching costs are large, expectation maximization[20] will lead to conclusions that are inferior to those derived from the hedging decision rule.
The hedging decision rule is designed to be more robust to switching costs. The task of “good judgment” then is to try to evaluate whether these costs (and possibly other considerations) are likely to be significant or not. If not, then one should deviate only very little from expectation maximization. If yes, then one should be more inclined to favor the hedging decision rule.
It is interesting to notice that it is only under Alice's assumptions that one needs to actually look at the efficiency of each intervention, and that one can come up with a concrete rule for comparing them which is not all-or-nothing.[21] While this has limitations, I find it very useful to have a concrete rule of thumb for comparing the efficiency of different interventions. In a final round of adjustment of Clara's decisions, I believe that this additional information should also be taken into account. The extent of this final adjustment is again left to Clara's "good judgment".[22]
How I came to write this
In this section, I want to explain what led me to write the present article. It comes from my attempt to understand the career recommendations given on the 80k website. The advice there changed recently, and in my opinion the new version strongly suggests that one should give much higher priority to careers related to existential risk reduction than to careers related to, say, improving health in poor countries.[23]
Before spreading the word, I wanted to make sure that I understood and agreed with it. The present article is a summary of my best effort, and my conclusion is that I still don’t understand it.[24]
In a nutshell, I am worried that switching costs have not been estimated properly. This can be because people at 80k feel more certain than I am about the trajectory of the far future; or because they think that switching costs are not very high. I have already discussed my opinion on the trajectory of the far future at length, so I will now only focus on the sort of switching costs I am worried about.
Suppose that I am very interested in improving health in poor countries, and that I am not all that convinced by relatively convoluted arguments about what will happen to sentient life in billions of years. Even if everyone in the EA community has the best intentions, I would personally find it depressing to be surrounded by people who think that what I intend to do is of negligible importance. I would also feel pressure to switch to an area such as AI safety, an extremely competitive topic requiring a lot of expertise. I think I would be very likely to simply leave the group.
Imagine now that in 10 years, someone comes up with a great argument which suddenly convinces the EA community that growth interventions are actually vastly superior to existential interventions. If most people interested in growth interventions have left the group, it will be extremely difficult for the EA community to bear the transition. Next, at least some people working on AI safety would decide that the cost of switching is too high for them, and that working on AI safety still kind of makes sense. As time passes, and supposing that the EA community has managed to transition to growth interventions without just disintegrating, people working on AI safety would grow tired of being reminded that their work is of negligible importance, and would tend to leave the group. Up until the next switch of opinion.
Notice also that in the fictitious scenario outlined above, it will in fact be quite difficult for the “great argument” to emerge from the EA community, and then also very hard for it to be known and acknowledged, since people with different interests no longer interact. And I am not even discussing possible synergistic effects between EA people working on different cause areas, which in my opinion can also be very significant.
Conclusion
This article examined the view that interventions aiming to reduce existential risk are virtually infinitely superior to those that aim to accelerate growth.
In my understanding, this view relies crucially on the assumption that the utility of the future cannot grow exponentially in the long term, and will instead essentially reach a plateau. While I do not intend to rule out this possibility, I tried to explain why I personally find the alternative possibility of sustained exponential growth at least plausible.
One attempt to aggregate these different predictions about the far future is to compute the expected value of different interventions, taking our uncertainty about the far future into account. In my opinion, this approach has important limitations, in particular because it ignores certain "switching costs".
The present article is a summary of my attempt to understand some of the ideas which I consider central to the EA movement. I suppose that the people working full-time on the problems discussed here have a much deeper understanding of the issues at stake, and a much finer position than the one I have outlined here. I hope that this article will signal that it may currently be very difficult to reverse-engineer what this finer position is. If nothing else, this article can thus be taken as a request for clarification.
Acknowledgements. I would like to warmly thank the members of the French EA community, and in particular Laura Green and Lennart Stern, for their support and very useful feedback.
- ↩︎
The point here is not about wondering if this number is reasonable. Rather, it is about seeing how this number enters (or does not enter) the decision process. But, to give some substance to it, if we very crudely conflate our utility function with world GDP, then I think it is reasonable to place a return of at least a factor of 10 on some of the better growth investments.
- ↩︎
Again I want to stress that the point here is not to debate these numbers (I was told that a decrease of the extinction risk of 0.1 percentage point for an investment of 0.1% of the world GDP was reasonable, but found it difficult to find references; I would appreciate comments pointing to relevant references).
- ↩︎
The growth rate measures the instantaneous increase of the function, in proportion to the size of the function. In formulas, if y(t) is the function, then the growth rate is y’(t)/y(t) (this is also the derivative of log(y(t))).
- ↩︎
If you feel uncomfortable with the idea that the function t^3 looks like a constant, let me stress that all the reasoning here is based on rates of growth. So if we were plotting these curves, it would be much more informative to draw the logarithm of these functions. And, really, log(t^3) does look like a constant when t is large. To be more precise, one can check that Bob's conclusions hold as long as the growth rate falls to essentially zero sufficiently quickly compared with the 100-billion-year time scale, so that we can conflate any such scenario with a "plateau" scenario. In formulas, denote the present time by t1 and the final time by T = t1 + 100 billion years. If we postulate that our utility function at time t is t^3, then the total utility of the future is the integral of this function for t ranging between t1 and T. The growth intervention allows us to replace this function by (t+s)^3, where s is such that (t1 + s)^3/t1^3 = 1.01. When we calculate the total utility of the future for this function, we find that it amounts to integrating the function t^3 for t varying between t1 + s and T + s. The utility gain caused by the intervention is thus essentially the integral between T and T+s of the function t^3 (the discrepancy near t1 is comparatively very small). This is approximately s T^3, which is extremely small compared with the total integral, which is of the order of T^4 (the ratio is of the order of s/T, which essentially compares the speedup brought by the intervention with the time scale of 100 billion years).
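For readers who prefer numbers, here is a small numeric check of this estimate (the value of t1 is an arbitrary choice of mine; only the conclusion that the relative gain is of order s/T matters):

```python
# Numeric check of the estimate above; the value of t1 is my own arbitrary choice.
t1 = 1e5        # assumed "present time" on Bob's clock, in years
T = t1 + 1e11   # end of sentient life, 100 billion years from now

def total_utility(shift=0.0):
    # integral of (t + shift)**3 for t between t1 and T, in closed form
    a, b = t1 + shift, T + shift
    return (b**4 - a**4) / 4

s = t1 * (1.01 ** (1 / 3) - 1)  # shift such that (t1 + s)**3 = 1.01 * t1**3

gain_growth = total_utility(s) / total_utility() - 1
print(f"relative gain from the growth intervention: {gain_growth:.1e}")  # of order s/T
print(f"for comparison, s/T is about:               {s / T:.1e}")
```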
- ↩︎
This was my experience when talking to people, and has been confirmed by my searching through the literature. In particular, this 80k article attempts to survey the views of the community (to be precise, “mostly people associated with CEA, MIRI, FHI, GCRI, and related organisations”), and states that although a growth intervention “looks like it may have a lasting speedup effect on the entire future”, “the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.” Nick Bostrom argues in more detail for a plateau in The future of humanity (2007), and in footnote 20 of his book Superintelligence. The “plateau” view is somewhat implicit in Astronomical waste (2003), as well as in the concept of “technological maturity” in Existential risk as global priority (2013). This view was perhaps best summarized by Holden Karnofsky here, where he says: “we’ve encountered numerous people who argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others.” (See however here for an update on Karnofsky’s ideas.)
- ↩︎
I use the word “we” but I do not mean to imply that sentient beings in the far future will necessarily look like the present-day “we”.
- ↩︎
To take a trivial example, I clearly prefer to watch a movie from beginning to end than to stare at a given frame for two hours. So my utility function is not simply a function of the situation at a given time which I then integrate. Rather, it takes into account the whole trajectory of what is happening as time flows.
- ↩︎
As another illustration, Elon Musk is surprised that space colonization is not receiving more attention, while many others counter that there is already a lot of useful work to be done here on Earth. I am suggesting here that this sort of situation may remain broadly unchanged even for extremely advanced civilizations. Incidentally, this would go some way toward mitigating Fermi's paradox: maybe other advanced civilizations have not come to visit us because they are mostly busy optimizing their surrounding environment, and don't care all that much about colonizing space.
- ↩︎
For one, I wonder if my cultural background in continental Europe makes me more likely to defend the point of view expressed here. It seems to me that the opposite view is more aligned with a rather “atomist” view of the ideal society as a collection of relatively small and mostly self-reliant agents, and that this sort of view is more popular in the US (and in particular in its tech community) than elsewhere. It also blends better with the hypothesis of a coming singularity. On a different note, we must acknowledge that the EA community is disproportionately made of highly intellectualizing people who find computer science, mathematics or philosophy to be very enjoyable activities. I bet it will not surprise anyone reading these lines if I say that I do math research for a living. And, well, I feel like I am in a better position to contribute if the best thing we can do is to work on AI safety, than if it is to distribute bed nets. In other words, the excellent alignment between many EAs’ interests and the AI safety problem is a warning sign, and suggests that we should be particularly vigilant that we are not fooling ourselves. This being said, I certainly do not mean to imply that EAs have a conscious bias; in fact I believe that the EA community is trying much harder than is typical to be free of biases. But a lot of our thinking processes happen unconsciously, and, for instance, if there is an idea around that looks reasonably well thought-of and whose conclusion I feel really happy with, then my subconscious thinking will not think as hard about whether there is a flaw in the argument as if I was strongly displeased with the idea. Or perhaps it will not bring it as forcefully to my conscious self. Or perhaps some vague version of a doubt will reach my conscious self, but I will not be most willing to play with this vague doubt until it can mature into something sufficiently strong to be communicable and credible. Other considerations related to the sociology of our community appear in Ben Garfinkel’s talk How sure are we about this AI stuff? (EAG 2018). A separate problem is that questioning an important idea of the EA movement requires, in addition to a fairly extensive knowledge of the community and its current thinking, at least some willingness to engage with a variety of ideas in mathematics, computer science, philosophy, physics, economics, etc., placing the “barrier to entry” quite high.
- ↩︎
I suppose that total utilitarians would want the utility function to be doubled in such a circumstance (which would force the exponent x to be 1). My personal intuition is more confused, and I would not object to a model in which the total utility is multiplied by some number between 1 and 2. (Maybe I feel that there should be some sort of premium to originality, so that creating an identical copy of something is not quite as good as twice having only one version of it; or maybe I’m just willing to accept that we are just looking for a simple workable model, which will necessarily be imperfect.)
- ↩︎
To be precise, this estimate implicitly postulates that we find matter to play with roughly in proportion to the volume of space we occupy, but in reality matter in space is rather more sparsely distributed. For a more accurate estimate, we can consider the effective dimension d of the distribution of matter around us, so that the amount of matter within distance r from us grows roughly like r^d. The estimate in the main text should then be exp(t^(1+d)), and this number d is in fact most likely below 3. However, I think any reasonable estimate of d will suggest that it is positive, and this suffices to ensure the validity of the conclusion that a growth rate that increases without bound is possible.
- ↩︎
I do not mean to imply that Clara’s view is typical or even common within the EA community (I just don’t know). I mostly want to use this character as an excuse to discuss expectation maximization.
- ↩︎
To be precise, Alice had not specified the constant growth rate she had in mind; here I take it that her original claim was “our utility function will always grow at a rate of about 3% per year”.
- ↩︎
For the record, Bostrom wonders in The vulnerable world hypothesis (2018) whether there would be ways, “using physics we don’t currently understand well, to initiate fast-growing processes of value creation (such as by creating an exponential cascade of baby-universes whose inhabitants would be overwhelmingly happy)”. I wonder how a hard-core expectation maximizer would deal with this possibility.
- ↩︎
- ↩︎
I wonder if people will counter with appeals to quantum mechanics and multiple universes. At any rate, if this is the sort of reasoning that underpins our decisions, then I would like it to be made explicit.
- ↩︎
Related considerations have been discussed here.
- ↩︎
I’ll make the optimistic assumption that we are not biased in some direction.
- ↩︎
My understanding is that this is in the ballpark of how OpenPhil operates.
- ↩︎
at least in the “short-sighted” form expressed by Clara in the story
- ↩︎
Let me recall this decision rule here: if the growth intervention causes a growth of our utility function of x%, and the existential intervention reduces the probability of extinction by y percentage points, then we choose the growth intervention when x > y, and the existential intervention otherwise.
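A minimal encoding of this rule of thumb (the function name is mine):

```python
# Minimal encoding of the rule of thumb above (the function name is mine).
def preferred_intervention(x_percent_growth, y_points_risk_reduction):
    """x: utility gain of the growth intervention, in percent;
    y: extinction-risk reduction of the existential intervention, in percentage points."""
    return "growth" if x_percent_growth > y_points_risk_reduction else "existential"

print(preferred_intervention(1.0, 0.1))  # the numbers from the story -> 'growth'
```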
- ↩︎
Strictly speaking, I think that we could actually write down complicated models encoding the strength of the switching costs etc., and identify explicit formulas that take every aspect discussed so far into consideration. But I am skeptical that we are in a good position to evaluate all the parameters that would enter such complicated models, and I prefer to just push the difficulties into "good judgment". I would personally give more credibility to someone telling me that they have made some "good judgment" adjustments and explaining to me in words the rough considerations they put into them, than to someone telling me that they have written down very complicated models, estimated all the relevant parameters and then applied the formula.
- ↩︎
I think this is most strongly expressed in this article, where it is stated that “Speeding up events in society this year looks like it may have a lasting speedup effect on the entire future—it might make all of the future events happen slightly earlier than they otherwise would have. In some sense this changes the character of the far future, although the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.” The key ideas article states that interventions such as improving health in poor countries “seem especially promising if you don’t think people can or should focus on the long-term effects of their actions”. This implicitly conveys that if you think that you can and should focus on long-term effects, then you should not aim to work in such areas.
- ↩︎
See also here for similar concerns.
Thank you, I think this is an excellent post!
I also sympathize with your confusion. FWIW, I think that a fair amount of uncertainty and confusion about the issues you've raised here is the epistemically adequate state to be in. (I'm less sure whether we can reliably reduce our uncertainty and confusion through more 'research'.) I tentatively think that the "received longtermist EA wisdom" is broadly correct, i.e. roughly that the most good we can do usually (for most people in most situations) is by reducing specific existential risks (AI, bio, ...), but I think that
(i) this is not at all obvious or settled, and involves judgment calls on my part which I could only partly make explicit and justify; and
(ii) the optimal allocation of ‘longtermist talent’ will have some fraction of people examining whether this “received wisdom” is actually correct, and will also have some distribution across existential risk reduction, what you call growth interventions, and other plausible interventions aimed at improving the long-term future (e.g. “moral circle expansion”) - for basically the “switching cost” and related reasons you mention [ETA: see also sc. 2.4 of GPI’s research agenda].
One thing in your post I might want to question is that, outside of your more abstract discussion, you phrase the question as whether, e.g., “AI safety should be virtually infinitely preferred to other cause areas such as global poverty”. I’m worried that this is somewhat misleading because I think most of your discussion rather concerns the question whether, to improve the long-term future, it’s more valuable to (a) speed up growth or to (b) reduce the risk of growth stopping. I think AI safety is a good example of a type-(b) intervention, but that most global poverty interventions likely aren’t a good example of a type-(a) intervention. This is because I would find it surprising if an intervention that has been selected to maximize some measure of short-term impact also turned out to be optimal for speeding up growth in the long-run. (Of course, this is a defeatable consideration, and I acknowledge that there might be economic arguments that suggest that accelerating growth in currently poor countries might be particularly promising to increase overall growth.) In other words, I think that the optimal “growth intervention” Alice would want to consider probably isn’t, say, donating to distribute bednets; I don’t have a considered view on what it would be instead, but I think it might be something like: doing research in a particularly dynamic field that might drive technological advances; or advocating changes in R&D or macroeconomic policy. (For some related back-of-the-envelope calculations, see Paul Christiano’s post on What is the return to giving?; they suggest “that good traditional philanthropic opportunities have a return of around 10 and the best available opportunities probably have returns of 100-1000, with most of the heavy hitters being research projects that contribute to long term tech progress and possibly political advocacy”, but of course there is a lot of room for error here. See also this post for how maximally increasing technological progress might look like.)
Lastly, here are some resources on the “increase growth vs. reduce risk” question, which you might be interested in if you haven’t seen them:
Paul Christiano’s post on (literal) Astronomical waste, where he considers the permanent loss of value from delayed growth due to cosmological processes (expansion, stars burning down, …). In particular, he also mentions the possibility that “there is a small probability that the goodness of the future scales exponentially with the available resources”, though he ultimately says he favors roughly what you called the plateau view.
In an 80,000 Hours podcast, economist Tyler Cowen argues that “our overwhelming priorities should be maximising economic growth and making civilization more stable”.
For considerations about how to deal with uncertainty over how much utility will grow as a function of resources, see GPI's research agenda, in particular the last bullet point of section 1.4. (This one deals with the possibility of infinite utilities, which raises somewhat similar meta-normative issues. I thought I remembered that they also discuss the literal point you raised, i.e. what if utility grows exponentially in the long run, but wasn't able to find it.)
I might follow up in additional comments with some pointers to issues related to the one you discuss in the OP.
I have two comments concerning your arguments against accelerating growth in poor countries. One is more “inside view”, the other is more “outside view”.
The “inside view” point is that Christiano’s estimate only takes into account the “price of a life saved”. But in truth GiveWell’s recommendations for bednets or deworming are to a large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who can lead more productive lives. This may lead to better returns than what his calculations suggest. (Micronutrient supplementation may also be quite efficient in this respect.)
The "outside view" point is that I find our epistemology really shaky and worrisome. Let me transpose the question into AI safety to illustrate that the point is not specific to growth interventions. If I want to make progress on AI safety, maybe I can try directly to "solve AI alignment". Let's say that I hesitate between this and trying to improve the reliability of current-day AI algorithms. I feel that, at least in casual conversations (perhaps especially with people who are not actually working in the area), people would be all too willing to jump to "of course the first option is much better because this is the real problem; if it succeeds, we win". But in truth there is a tradeoff with being able to make any progress at all; it is not automatically better to turn your attention to the most long-term thing you can think of. And I think it is extremely useful to have some feedback loop that allows you to track what you are doing, and by necessity this feedback loop will be somewhat short term. To summarize, I believe that there is a "sweet spot" where you choose to focus on things that seem to point in the right direction and also allow you at least some modicum of feedback over shorter time scales.
Now, consider the argument "this intervention cannot be optimal in the long run because it has been optimized for the short term". This argument essentially allows you to reject any intervention that has shown great promise based on the observations we can gather. So, effective altruism started out as "evidence based" etc., and now we have reached a situation where we have built a theoretical construct that not only allows us to place certain interventions above all others without having to give any empirical evidence backing this, but moreover, if another intervention is proposed that comes with good empirical backing, we can use this fact as an argument against that intervention!
I may be pushing the argument a bit too far. This still makes me feel very uncomfortable.
Regarding your “outside view” point: I agree with what you say here, but think it cannot directly undermine my original “outside view” argument. These clarifications may explain why:
My original outside view argument appealed to the process by which certain global health interventions such as distributing bednets have been selected rather than their content. The argument is not “global health is a different area from economic growth, therefore a health intervention is unlikely to be optimal for accelerating growth”; instead it is “an intervention that has been selected to be optimal according to some goal X is unlikely to also be optimal according to a different goal Y”.
In particular, if GiveWell had tried to identify those interventions that best accelerate growth, I think my argument would be moot (no matter what interventions they had come up with, in particular in the hypothetical case where distributing bednets had been the result of their investigation).
In general, I think that selecting an intervention that's optimal for furthering some goal needs to pay attention to all of importance, tractability, and neglectedness. I agree that it would be bad to exclusively rely on the heuristic "just focus on the most important long-term outcome/risk" when selecting longtermist interventions, just as it would be bad to just rely on the heuristic "work on fighting whatever disease has the largest disease burden globally" when selecting global health interventions. But I think these would just be bad ways to select interventions, which seems orthogonal to the question of when an intervention selected for X will also be optimal for Y. (In particular, I don't think that my original outside view argument commits me to the conclusion that in the domain of AI safety it's best to directly solve the largest or most long-term problem, whatever that is. I think it does recommend deliberately selecting an intervention optimized for reducing AI risk, but this selection process should also take into account feedback loops and all the other considerations you raised.)
The main way I can see to undermine this argument would be to argue that a certain pair of goals X and Y is related in such a way that interventions optimal for X are also optimal for Y (e.g., X and Y are positively correlated, though this in itself wouldn’t be sufficient). For example, in this case, such an argument could be of the type “our best macroeconomic models predict that improving health in currently poor countries would have a permanent rate effect on growth, and empirically it seems likely that the potential for sustained increases in the growth rate is largest in currently poor countries” (I’m not saying this claim is true, just that I would want to see something like this).
Ok, I understand your point better now, and find that it makes sense. To summarize, I believe that the art of good planning toward a distant goal is to find a series of intermediate targets that we can focus on, one after the other. I was worried that your argument could be used against any such strategy. But in fact your point is that, as it stands, health interventions have not been selected by a "planner" who was actually thinking about the long-term goals, so it is unlikely that the selected interventions are the best we can find. That sounds reasonable to me. I would really like to see more research into what optimizing for long-term growth could look like (and what kind of "intermediate targets" this would select). (There is some of this in Christiano's post, but there is clearly room for more in-depth analysis in my opinion.)
I think this is a fair point. Specifically, I agree that GiveWell’s recommendations are only partly (in the case of bednets) or not at all (in the case of deworming) based on literally averting deaths. I haven’t looked at Paul Christiano’s post in sufficient detail to say for sure, but I agree it’s plausible that this way of using “price of a life saved” calculations might effectively ignore other benefits, thus underestimating the benefits of bednet-like interventions compared to GiveWell’s analysis.
I would need to think about this more to form a considered view, but my guess is this wouldn’t change my mind on my tentative belief that global health interventions selected for their short-term (say, anything within the next 20 years) benefits aren’t optimal growth interventions. This is largely because I think the dialectical situation looks roughly like this:
The “beware suspicious convergence” argument implies that it’s unlikely (though not impossible) that health interventions selected for maximizing certain short-term benefits are also optimal for accelerating long-run growth. The burden of proof is thus with the view that they are optimal growth interventions.
In addition, some back-of-the-envelope calculations suggest the same conclusion as the first bullet point.
You’ve pointed out a potential problem with the second bullet point. I think it’s plausible to likely that this significantly to totally removes the force of the second bullet point. But even if the conclusion of the calculations were completely turned on their head, I don’t think they would by themselves succeed in defeating the first bullet point.
Having settled most of the accessible universe, we'll have hundreds of billions or even trillions of years to try to keep improving how we're using the matter and energy at our disposal.
Doesn't it seem almost certain that over such a long time period our annual rate of improvement in the value generated by the best configuration would eventually asymptote towards zero? I think that's all that's necessary for safety to be substantially more attractive than speed-ups.
(BTW safety is never ‘infinitely’ preferred because even on a strict plateau view the accessible universe is still shrinking by about a billionth a year.)
Agreed. And even in the scenario where we could continue to find more valuable patterns of matter even billions of years in the future, I don’t think that efforts to accelerate things now would have any significant impact on the value we will create in the future, because it seems very likely that our future value creation will mostly depend on major events that won’t have much to do with the current state of things.
Let’s consider the launch of Von Neumann probes throughout the universe as such a possible major event: even if we could increase our current growth rate by 1% with a better allocation of resources, it doesn’t mean that the future launch of these probes will be 1% more efficient. Rather, the outcomes of this event seem largely uncorrelated with our growth rate prior to that moment. At best, accelerating our growth would hasten the launch by a tiny bit, but this is very different than saying “increasing our growth by 1% now will increase our whole future utility by 1%”.
Let me call X the statement: "our rate of improvement remains bounded away from zero far into the future". If I understand correctly, you are saying that we have great difficulty imagining a scenario where X happens, and therefore X is very unlikely.
Human imagination is very limited. For instance, most of human history shows very little change from one generation to the next; in other words, people could not imagine future generations doing things in better ways than the ones they already knew. Here you ask our imagination to perform a spectacularly difficult task, namely to imagine what extremely advanced civilizations are likely to be doing in billions of years. I am not surprised if we do not manage to produce a credible scenario where X occurs. I do not take this as strong evidence against X.
Separately from this, I personally do not find it very likely that we will ultimately settle most of the accessible universe, as you suppose, because I would be surprised if human beings hold such a special position. (In my opinion, either advanced civilizations are not so interested in expanding in space; or else, we will at some point meet a much more advanced civilization, and our trajectory after this point will probably depend little on what we can do before it.)
Concerning the point you put in parentheses about safety being “infinitely” preferred, I meant to use phrases such as “virtually infinitely preferred” to convey that the preference is so strong that any actual empirical estimate is considered unnecessary. In footnote 5 above, I mentioned this 80k article intended to summarize the views of the EA community, where it is said that speedup interventions are “essentially morally neutral” (which, given the context, I take as being equivalent to saying that risk mitigation is essentially infinitely preferred).
As a first pass, the rate of improvement should asymptote towards zero so long as there's a theoretical optimum and declining returns to further research before the heat death of the universe, which seem like pretty mild assumptions.
As an analogy, there’s an impossibly wide range of configurations of matter you could in theory use to create a glass from which we can drink water. But we’ve already gotten most of the way towards the best glass for humans, I would contend. I don’t think we could keep improving glasses in any meaningful way using a galaxy’s resources for a trillion years.
Keep in mind eventually the light cone of each star shrinks so far it can’t benefit from research conducted elsewhere.
I think it would be really useful if this idea was explained in more detail somewhere, preferably on the 80k website. Do you think there is a chance that this happens at some point? (hopefully not too far in the future ;-) )
Yes it needs to go in an explanation of how we score scale/importance in the problem framework! It’s on the list. :)
Alternatively, I've been wondering if we need a standalone article explaining how we can influence the long term, and what the signs are that something might be highly leveraged for doing that.
This is a really interesting post, thanks for writing it up.
I think I have two main models for thinking about these sorts of issues:
The accelerating view, where we have historically seen several big speed-ups in rate of change as a result of the introduction of more powerful methods of optimisation, and the introduction of human-level AGI is likely to be another. In this case the future is both potentially very valuable (because AGI will allow very rapid growth and world-optimisation) and endangered (because the default is that new optimisation forces do not respect the values or ‘values’ of previous modes.)
Physics/Chemistry/Plate Tectonics
Life/Evolution
Humanity/Intelligence/Culture/Agriculture
Enlightenment/Capitalism/Industrial Revolution
Recursively self-improving AGI?
The God of Straight Lines approach, where we’ll continue to see roughly 2% RGDP growth, because that is what always happens. AI will make us more productive, but not dramatically so, and at the same time previous sources of productivity growth will be exhausted, so overall trends will remain roughly intact. As such, the future is worth a lot less (perhaps we will colonise the stars, but only slowly, and growth rates won’t hit 50%/year) but also less endangered (because all progress will be incremental and slow, and humanity will remain in control). I think of this as being the epistemically modest approach.
As a result, my version of Clara thinks of AI Safety work as reducing risk in the worlds that happen to matter the most. It's also possible that these are the worlds where we can have the most influence, if you think that strong negative feedback mechanisms strongly limit action in the Straight Line world.
Note that I was originally going to describe these as the inside and outside views, but I actually think that both have decent outside-view justifications.
Interesting view. It seems to me like it makes sense, but I also feel like it’d be valuable for it to be fleshed out and critiqued further to see how solid it is. (Perhaps this has already been done somewhere—I do feel like I’ve heard vaguely similar arguments here and there.)
Also, arriving at this thread 5 months late, I notice Toby Ord makes a similar argument in The Precipice. He writes about:
Thanks! I’m not sure if I made it up or not. I will try to find some time to write more about it.
I think it’s worth noting that, for predictions concerning the next few decades, accelerating growth or “the god of straight lines” with 2% growth are not the only possibilities. There is for instance this piece by Tyler Cowen and Ben Southwood on the slowing down of scientific progress, which I find very good. Also, in Chapter 18 of Gordon’s book on “the rise and fall of American growth”, he predicts (under assumptions that I find reasonable) that the median disposable income per person in the US will grow by about 0.3% per year on average over the period 2015-2040. (This does not affect your argument though, as far as I understand it.)
That’s an interesting point of view. But notice that none of your two options is compatible with the “plateau” view that, as far as I understand, forms the basis of the recommendations on the 80k website. (See also Robert Wiblin’s comment below.)
In the ‘2% RGDP growth’ view, the plateau is already here, since exponential RGDP growth is probably subexponential utility growth. (I reckon this is a good example of confusion caused by using ‘plateau’ to mean ‘subexponential’ :) )
In the ‘accelerating view’, it seems that whether there is exponential utility growth in the long term comes down to the same intuitions about whether things keep accelerating forever that are discussed in other threads.
Ok, but note that this depends crucially on whether you decide that your utility looks more like log(GDP), or more like (GDP)^0.1, say. I don’t know how we can be confident that it is one and not the other.
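To make that dependence concrete (a minimal calculation of my own, with a constant growth rate $g$ standing in for the 2% RGDP scenario): if $\mathrm{GDP}(t) = \mathrm{GDP}(0)\,e^{g t}$, then

$$U(t) = \log \mathrm{GDP}(t) = \log \mathrm{GDP}(0) + g t \quad \text{(linear, hence subexponential)},$$

whereas

$$U(t) = \mathrm{GDP}(t)^{0.1} = \mathrm{GDP}(0)^{0.1}\, e^{0.1 g t} \quad \text{(still exponential)}.$$

So under log utility the 2% world is already “post-plateau” in the sense used in the post, while a power-law utility turns the very same GDP path into unbounded exponential utility growth.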
You describe the view you’re examining as:
You then proceed by discussing considerations that are somewhat specific to the particular types of interventions you’re comparing, i.e. reducing extinction risk versus speeding up growth.
You might be interested in another type of argument questioning this view. These arguments attack the “virtually infinitely” part of the view, in a way that’s agnostic about the interventions being compared. For such arguments, see e.g.:
Brian Tomasik, Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness
Tobias Baumann, Uncertainty smooths out differences in impact
Thanks a lot, this all looks very useful. I found the texts by Tomasik and Baumann particularly interesting, and was not aware of them.
Thanks!
I wanted to say thanks for spelling that out. It seems that this implicitly underlies some important disagreements. By contrast, I think this addition is somewhat counterproductive:
The idea of a plateau brings to mind images of sub-linear growth, but all that is required is sub-exponential growth, a much weaker claim. I think this will cause confusion.
I also appreciated that the piece is consistently accurate. As I wrote this comment, there were several times when I considered writing some response, then saw that the piece had a caveat for exactly the problem I was going to point out, or a footnote that explained what I was confused about.
A particular kind of accuracy is representing the views of others well. I don’t think the piece is always as charitable as it could be, but details like footnote 15 make it much easier to understand what exactly other people’s views are. Also, the simple absence of gross mischaracterisations of other people’s views made this piece much more useful to me than many critiques.
Here are a few thoughts on how the model or framing could be more useful:
‘Growth rate’
The concept of a ‘growth rate’ seems useful in many contexts. However, applying the concept to a long-run process locks the model of the process into the framework of an exponential curve, because only exponential curves have a meaningful long-run growth rate (as defined in this piece). The position that utility will grow like an exponential is just one of many possibilities. As such, it seems preferable to simply talk directly in terms of the shape of long-run utility.
Model decomposition
When discussing the shape of long-run utility, it might be easier to decompose total utility into population size and utility per capita. In particular, the ‘utility = log(GDP)’ model is actually ‘in a perfectly equal world, utility per capita = log(GDP per capita)’. That is, in a perfectly equal world, utility = population size × log(GDP per capita).[1]
For example, this resolves the objection that
The proposed duplication doubles population size while keeping utility per capita fixed, so it is a doubling[2] of utility in a model of this form, as expected.
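In symbols (a minimal sketch, with $N$ the population size, $y$ GDP per capita, and $u$ any function of $y$, e.g. $\log$):

$$U = N \cdot u(y), \qquad (N, y) \mapsto (2N, y) \;\Longrightarrow\; U \mapsto 2U,$$

so the duplication doubles total utility while leaving utility per capita unchanged.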
More broadly, I suspect that the feasibility of ways to gain at-least-exponentially greater resources over time (analogous to population size, e.g. baby-universes, reversible computation[3]) and ways to use those resources at-least-exponentially better (analogous to utility per capita, no known proposals?) might be debated quite separately.
How things relate to utility
Where I disagreed or thought the piece was less clear, it was usually because something seemed at risk of being confused for utility. For example, explosive growth in ‘the space of possible patterns of matter we can potentially explore’ is used as an argument for possible greater-than-exponential growth in utility, but the connection between these two things seems tenuous. Sharpening the argument there could make it more convincing.
More broadly, I can imagine any concrete proposal for how utility per capita might be able to rise exponentially over very long timescales being much more compelling for taking the idea seriously. For example, if the Christiano reversible computation piece Max Daniel links to turns out to be accurate, that naively seems more compelling.
Switching costs
My take is that these parts don’t get at the heart of any disagreements.
It already seems fairly common that, when faced with two approaches which look optimal under different answers to intractable questions, Effective Altruism-related teams and communities take both approaches simultaneously. For example, this is ongoing at the level of cause prioritisation and in how the AI alignment community works on multiple agendas simultaneously. It seems that the true disagreements are mostly around whether or not growth interventions are sufficiently plausible to add to the portfolio, rather than whether diversification can be valuable.
The piece also ties some concerns about community health to switching costs. I particularly agree that we would not want to lose informed critics. However, similarly to the above, I don’t think this is a real point of disagreement. Discussed simultaneously are the risk of being ‘surrounded by people who think that what I intend to do is of negligible importance’ and the risk of people ‘being reminded that their work is of negligible importance’. This seems to conflate what people believe with whether they treat those around them with respect, which I think are largely independent problems. It seems fairly clear that we should attempt to form accurate beliefs about what is best, and simultaneously be kind and supportive to other people trying to help others using evidence and reason.
---
[1] The standard log model is surely wrong but the point stands with any decomposition into population size multiplied by a function of GDP per capita.
[2] I think the part about creating identical copies is not the main point of the thought experiment and would be better separated out (by stipulating that a very similar but not identical population is created). However, I guess that in the case that we are actually creating identical people we can handle how much extra moral relevance we think this creates through the population factor.
[3] I guess it might be worth making super clear that these are hypothetical examples rather than things for which I have views on whether they are real.
Thanks for your detailed and kind comments! It’s true that calling this a “plateau” is not very accurate. It was my attempt to make the reader’s life a bit easier by using a notion that is relatively easy to grasp in the main text (with some mathematical details in a footnote for those who want more precision). As for the growth rate: mathematically, a function is fully described by its growth rate (and an initial condition), and the crux here is whether or not the growth rate goes to zero relatively quickly, so it seems like a useful concept to me.
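For concreteness, writing $r(t) = U'(t)/U(t)$ for the growth rate of the utility function $U$, we have

$$U(t) = U(0)\,\exp\!\Big(\int_0^t r(s)\,\mathrm{d}s\Big),$$

so the crux is whether $r(t)$ stays bounded away from zero (exponential or faster growth) or decays to zero quickly (subexponential growth, e.g. $r(t) \approx 3/t$ corresponding to $U(t) \propto t^3$); $U$ is literally bounded, i.e. a true plateau, only in the stronger case where $\int_0^\infty r(s)\,\mathrm{d}s$ is finite.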
(When you refer to footnote 15, that can make sense, but I wonder whether you meant footnote 5 instead.)
I agree with all the other things you say. I may be overly worried about our community becoming more and more focused on one particular cause area, possibly because of a handful of disappointing personal experiences. One of the main goals of this post was to make people more aware of the fact that the current recommendations rest in an important way on a certain belief about the trajectory of the far future, and maybe I should have focused on that goal alone instead of trying to do several things at once and not doing any of them very well :-)
One direction you could take this: It’s probably not actually necessary for us to explore 2^(10^50) patterns in a brute-force manner. For example, once I’ve tried brussels sprouts once, I can be reasonably confident that I still won’t like them if you move a few atoms over microscopically. A Friendly AI programmed to maximize a human utility function it has uncertainty about might offer incentives for humans to try new matter configurations that it believes offer high value of information. For example, before trying a dance performance which lasts millions of years, it might first run an experimental dance performance which lasts only one year and see how humans like it. I suspect a superintelligent Friendly AI would hit diminishing returns on experiments of this type within the first thousand years.
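Here is a toy illustration of that diminishing-returns intuition (entirely hypothetical: a made-up smooth “utility landscape” over a 1-D configuration space, with nearest-neighbour prediction standing in for whatever inference a Friendly AI would actually do):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth "utility landscape" over a 1-D configuration space [0, 1]:
# nearby configurations (brussels sprouts with a few atoms moved) have similar value.
def utility(x):
    return np.sin(7 * x) + 0.5 * np.cos(19 * x)

grid = np.linspace(0, 1, 10_000)   # stand-in for "all possible" configurations
true_values = utility(grid)

for n_experiments in (3, 10, 30, 100, 300):
    tried_x = rng.uniform(0, 1, n_experiments)   # configurations actually tested
    tried_u = utility(tried_x)
    # Predict every untried configuration from its nearest tried neighbour.
    nearest = np.abs(grid[:, None] - tried_x[None, :]).argmin(axis=1)
    worst_error = np.max(np.abs(true_values - tried_u[nearest]))
    print(f"{n_experiments:4d} experiments -> worst-case prediction error {worst_error:.3f}")
```

The worst-case error falls quickly as the number of experiments grows, which is the sense in which I’d expect diminishing returns, as long as value really does vary smoothly with configuration (which is the substantive assumption).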
Very interesting!
If I understand you correctly, this is a one-time expenditure, so we are talking about ~$80 billion. This is a model that considered $3 billion being spent on AGI safety. It was a marginal analysis, but I think many would agree that this spending would address a large fraction of AGI risk, which is itself a large fraction of total existential risk. So if it reduced existential risk overall by one percentage point, it would be about 2.5 orders of magnitude more cost-effective than you have assumed, which is much better than the growth intervention. Investment in nuclear winter resilience has similar or even better returns. So I think we could spend a lot more money on existential risk mitigation and it would still be no-regrets, even with continued exponential growth of utility.
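Spelling out the arithmetic behind the “2.5 orders of magnitude” (with the numbers above: roughly $80 billion for 0.1 percentage points of risk reduction in the post, versus $3 billion for the one percentage point supposed here):

$$\frac{\$80\text{B} / 0.1\ \text{pp}}{\$3\text{B} / 1\ \text{pp}} = \frac{\$800\text{B per pp}}{\$3\text{B per pp}} \approx 270 \approx 10^{2.4}.$$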
If I understand you correctly, this one-time investment of 0.1% of GDP increases GDP by 1% above business as usual for all time. So if you look over one century without discounting, that looks like a benefit-to-cost ratio of 1,000. I think there has been discussion about how we have increased our R&D dramatically over the past few decades, yet GDP growth has not increased. So maybe someone can jump in with the marginal returns on R&D. Or maybe you had something else in mind?
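That ratio comes from comparing a one-time cost of 0.1% of GDP with a permanent 1% uplift over one undiscounted century:

$$\frac{\text{benefit}}{\text{cost}} \approx \frac{1\% \times 100\ \text{years}}{0.1\%} = 1000.$$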
This sounds like an argument from The Age of Em: that once we accelerate our thought processes, expanding into space would be too painfully slow.
The references you put up look really interesting! I think part of my discomfort comes from not being aware of such attempts to estimate the actual impact of specific risk interventions. I’m very happy to discover them; I wish it were easier to find them!
Also, my wild guess is that if the existential risk intervention came out as cost effective for the present generation, then it may pass your test even with continued exponential growth in utility.
You say that:
I agree in the sense that I think your simple models succeed in isolating an important consideration that wouldn’t itself be qualitatively altered by looking at a more complex model.
However, I do think (without implying that this contradicts anything you have said in the OP) that there are other crucial premises for the argument concluding that reducing existential risk is the best strategy for most EAs. I’d like to highlight three, without implying that this list is comprehensive.
One important question is how growth and risk interact. Specifically, it seems that we face existential risks of two different types: (a) ‘exogenous’ risks with the property that their probability per wall-clock time doesn’t depend on what we do (perhaps a freak physics disaster such as vacuum decay); and (b) ‘endogenous’ risks due to our activities (e.g. AI risk). The probability of such endogenous risks might correlate with proxies such as economic growth or technological progress, or more specific kinds of these trends. As an additional complication, the distinction between exogenous and endogenous risks may not be clear-cut, and arguably is itself endogenous to the level of progress—for example, an asteroid strike could be an existential risk today but not for an intergalactic civilization. Regarding growth, we might thus think that we face a tradeoff where faster growth would on one hand reduce risk by allowing us to more quickly reach thresholds that would make us invulnerable to some risks, but on the other hand might exacerbate endogenous risks that increase with the rate of growth. (A crude model for why there might be risks of the latter kind: perhaps ‘wisdom’ increases at fixed linear speed, and perhaps the amount of risk posed by a new technology decreases with wisdom.)
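One way to write down that crude model (my own formalisation, with placeholder functional forms): let wisdom grow linearly, $W(t) = a\,t$; let the technology level $T(t)$ grow at rate $g$; let the endogenous hazard rate increase with how far technology has outrun wisdom, $\lambda_{\text{endo}}(t) = h\big(T(t)/W(t)\big)$ with $h$ increasing; and suppose both hazards apply only until we reach an invulnerability threshold at time $t^*(g)$, which is decreasing in $g$. The total risk incurred before reaching safety is then roughly

$$\int_0^{t^*(g)} \Big[\lambda_{\text{exo}} + h\big(T(t)/(a\,t)\big)\Big]\,\mathrm{d}t,$$

and faster growth shrinks the window $t^*(g)$ while inflating the endogenous term inside it, which is exactly the tradeoff just described.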
I think “received wisdom” is roughly that most risk is endogenous, and that more fine-grained differential intellectual or technological progress aimed specifically at reducing such endogenous risk (e.g. working on AI safety rather than generically increasing technological progress) is therefore higher-value than shortening the window of time during which we’re exposed to some exogenous risks.
See for example Paul Christiano, On Progress and Prosperity
A somewhat different lens is to ask how growth will affect the willingness of impatient actors (i.e. those who discount future resources at a higher rate than longtermists) to spend resources on existential risk reduction. This is part of what Leopold Aschenbrenner has examined in his paper on Existential Risk and Economic Growth.
More generally, the value of existential risk reduction today depends on the distribution of existential risk over time, including into the very long-run future, and on whether today’s efforts would have permanent effects on that distribution. This distribution might in turn depend on the rate of growth, e.g. for the reasons mentioned in the previous point. For an excellent discussion, see Tom Sittler’s paper on The expected value of the long-term future. In particular, the standard argument for existential risk reduction requires the assumption that we will eventually reach a state with much lower total risk than today.
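To see why that last assumption matters (a standard back-of-the-envelope, not specific to Sittler’s paper): if the per-century extinction risk stayed at a constant $r$ forever, the expected number of future centuries would be

$$\sum_{k\ge 1}(1-r)^k = \frac{1-r}{r}\approx \frac{1}{r},$$

only about ten centuries for $r = 10\%$, so a one-off reduction of this century’s risk would buy relatively little; the astronomical stakes behind the standard argument appear only if $r$ later falls to nearly zero.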
A somewhat related issue is the distribution of opportunities to improve the long-term future over time. Specifically, will there be more efficient longtermist interventions in, say, 50 years? If yes, this would be another reason to favor growth over reducing risk now, though more specifically it would favor growth not of the economy as a whole but of the pool of resources dedicated to improving the long-term future, for example through ‘EA community building’ or investing to give later. Relatedly, the observation that longtermists are unusually patient (i.e. discount future resources at a lower rate) is both a reason to invest now and give later, when longtermists control a larger share of the pie, and a consideration increasing the value of “ensuring that the future proceeds without disruptions”, potentially by using resources now to reduce existential risk. For more, see e.g.:
Toby Ord, The timing of labour aimed at reducing existential risk
Owen Cotton-Barratt, Allocating risk mitigation across time
Will MacAskill, Are we living at the most influential time in history?
Phil Trammell, Philanthropic timing and the Hinge of History
You’re right that these are indeed important considerations that I swept under the rug… Thanks again for all the references.
As I said in another comment, one relevant complication seems to be that risk and growth interact. In particular, the interaction might be such that speeding up growth could actually have negative value. This has been debated for a long time, and I don’t think the answer is obvious. It might be something we’re clueless about.
(See Paul Christiano’s How useful is “progress”? for an ingenious argument for why either
(a) “People are so badly mistaken (or their values so misaligned with mine) that they systematically do harm when they intend to do good, or”
(b) “Other (particularly self-interested) activities are harmful on average.”
Conditional on (b) we might worry that speeding up growth would work via increasing the amount or efficiency of various self-interested activities, and thus would be harmful.
I’m not sure if I buy the argument, though. It is based on “approximat[ing] the changes that occur each day as morally neutral on net”. But on longer timescales it seems that we should be highly uncertain about the value of changes. It thus seems concerning to me to look at a unit of time for which the magnitude of change is unintuitively small, round it to zero, and extrapolate from this to a large-scale conclusion.)
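To make that worry slightly more formal (my framing, not Christiano’s): if the net moral value of the changes occurring on day $t$ is some small $\epsilon_t$, then the value of a long stretch of progress is $\sum_{t=1}^{T}\epsilon_t$, which can be of order $T\,\bar{\epsilon}$, and hence large, even when each daily $\epsilon_t$ is far too small for our intuitions to register; rounding each day to zero silently discards exactly this term.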