Assumptions about the far future and cause priority

Abstract. This article examines the position that cause areas related to existential risk reduction, such as AI safety, should be virtually infinitely preferred to other cause areas such as global poverty. I will explore some arguments for and against this position. My first goal is to raise awareness of the crucial importance of a particular assumption about the far future, namely that long-term exponential growth of our utility function is impossible. I will also discuss the classical decision rule based on the maximization of expected values. My second goal is to question this assumption and this decision rule. In particular, I wonder whether exponential growth could be sustained through the exploration of increasingly complex patterns of matter; and whether, when attempting to maximize the expected values of different actions, we might forget to take into account possibly large costs caused by later updates of our beliefs about the far future. While I consider the ideas presented here to be highly speculative, my hope is to elicit a more thorough analysis of the arguments underlying the case for existential risk reduction.

A fictitious conversation on cause priority

The considerations below could be put into complicated-looking mathematical models involving integrals and probability measures. I will not follow this path, and will focus instead on a handful of simple model cases. For convenience of exposition, fictitious characters will each hold one of these models as their belief about the far future. These models will be very simple. In my opinion, nothing of value is lost by proceeding in this way.

The characters in the fictitious story have a fantastic opportunity to do good: they are about to spend 0.1% of the world GDP on whatever they want. They will debate what to do in light of their beliefs on the far future. They agree on many things already: they are at least vaguely utilitarian and concerned about the far future; they believe that in exactly 100 years from now (unless they intervene), there will be a 10% chance that all sentient life suddenly goes extinct (and otherwise everything goes on just fine); outside of this event they believe that there will be no such existential risk; finally, they believe that sentient life must come to an end in 100 billion years. Also, we take it that all these beliefs are actually correct.

The characters in the story hesitate between a “growth” intervention, which they estimate would instantaneously raise their utility function by 1%,[1] and an “existential” intervention, which would reduce the probability of extinction they will face in 100 years to 9.9% instead of 10%.[2]

Alice believes that unless extinction occurs, our utility function always grows at a roughly constant rate, until everything stops in 100 billion years. She calculates that in her model, the growth intervention moves the utility function upwards by 1% at any point in the future. In particular, the expectation of the total utility created in the future increases by 1% if she chooses the growth intervention. With the existential intervention, she calculates that (up to a minuscule error) this expectation moves up by 0.1%. Since 1% > 0.1%, she argues for the growth intervention.
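To make this concrete, here is a minimal numerical sketch of Alice’s reasoning (the numbers are the made-up ones from the story, and it assumes, as Alice does, that the first 100 years contribute a negligible share of total utility under steady long-term growth):

```python
# A sketch of Alice's comparison, using the story's made-up numbers.
# Under her model, expected total utility is approximately
# P(survival) * U_total, since the first 100 years contribute a negligible
# share of the total when growth is steady over 100 billion years.

p_extinction = 0.10          # baseline extinction probability in 100 years
p_extinction_after = 0.099   # existential intervention: 10% -> 9.9%
growth_boost = 1.01          # growth intervention: +1% utility at all times

gain_growth = growth_boost - 1
gain_existential = (1 - p_extinction_after) / (1 - p_extinction) - 1

print(f"growth intervention:      +{gain_growth:.2%}")       # +1.00%
print(f"existential intervention: +{gain_existential:.2%}")  # +0.11%, i.e. about 0.1%
```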

Bob’s view on the far future is different. He believes that the growth rate of our utility function will first accelerate, as we discover more and more technologies. In fact, it will accelerate so fast that we will quickly have discovered all discoverable technologies. We will similarly quickly figure out the best arrangement of matter to maximize our utility function locally, and all that will be left to do is colonize space and fill it with this optimal arrangement of matter. However, we are bound by the laws of physics and cannot colonize space faster than the speed of light. This implies that in the long run, our utility function cannot grow faster than t^3 (where t is time). The growth rate of this function[3] decays to zero quickly, like 1/​t. So in effect, we may as well suppose that our utility function will spike up quickly, and then plateau at a value that can essentially be regarded as a constant[4]. For the existential intervention, he finds that the expected utility of the future increases by about 0.1%, in agreement with Alice’s assessment. However, he reaches a very different conclusion when evaluating the growth intervention. Indeed, in his model, the growth intervention only improves the fate of the future before the onset of the plateau, and brings this onset a bit closer to the present. In particular, it has essentially no effect on the utility function after the plateau is reached. But this is where the vast majority of the future resides. So the growth intervention will barely budge the total utility of the future. He therefore argues for the existential intervention.
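To make Bob’s comparison concrete as well, here is a rough numerical sketch of his t^3 model; the value of t1 below (the “present time” on Bob’s clock) is a number I am making up purely for illustration, and as footnote 4 explains, only the ratio between the speedup and the 100 billion year time scale matters:

```python
# A rough sketch of Bob's comparison.  His model: utility(t) = t**3 between a
# "present time" t1 and T = t1 + 100 billion years, weighted by the
# probability of surviving the extinction event in 100 years.
# t1 is a hypothetical, made-up number; only the ratio s/T matters in the end.

t1 = 1e5            # assumed present time on Bob's clock, in years
T = t1 + 100e9      # end of sentient life

def expected_total_utility(shift=0.0, survival=0.9):
    # survival probability times the integral of (t + shift)**3 dt from t1 to T
    a, b = t1 + shift, T + shift
    return survival * (b**4 - a**4) / 4

# Growth intervention: time shift s chosen so that utility jumps by 1% today.
s = t1 * (1.01 ** (1 / 3) - 1)
baseline = expected_total_utility()
gain_growth = expected_total_utility(shift=s) / baseline - 1
gain_existential = expected_total_utility(survival=0.901) / baseline - 1

print(f"growth intervention:      +{gain_growth:.1e}")       # ~1e-8, negligible
print(f"existential intervention: +{gain_existential:.2%}")  # ~0.11%
```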

Clara holds a more sophisticated model of the far future than both Alice and Bob. She acknowledges that we cannot be certain about our predictions. She holds that there is a range of different possible scenarios for the far future, to which she assigns certain probabilities. In fact, she puts a weight of 50% on Alice’s model, and a weight of 50% on Bob’s model. Her calculation depends crucially on comparing the expected total utility of the future under each model. She considers that the growth in utility in Alice’s model is slow enough that the plateau appearing in Bob’s model is never within sight, so that the far future carries much greater utility under Bob’s model; or else, she reasons, Alice must have failed to properly take into account the slowdown that Bob’s model predicts. In any case, she rejoins Bob in arguing for the existential intervention.

In the next sections, we dig deeper into some of the arguments appearing in the preceding discussion.

A closer look at Bob’s arguments (no exponential growth)

As far as I can tell, some version of Bob’s view that our utility function ultimately reaches a plateau (or grows no faster than t^3) is the more typical view among EA people who have thought about the problem.[5] I will focus now on the examination of this point.

This view relies on the assumption that we[6] will quickly discover the essentially optimal way to organize matter in the region of space that we occupy. Once this is done, all that is left to do is to expand in space and reproduce this optimal arrangement of matter over and over again.

It seems extremely unlikely to me that we will come remotely close to discovering the utility-maximizing pattern of matter that can be formed even just here on Earth. There are about 10^50 atoms on Earth. In how many different ways could these atoms be organized in space? To keep things simple, suppose that we just want to delete some of them to form a more harmonious pattern, and otherwise do not move anything. Then there are already 2^(10^50) possible patterns for us to explore.

The number of atoms on our planet is already so large that it is many orders of magnitude beyond our intuitive grasp (in comparison, 100 billion years almost feels like you can touch it). So I’m not sure what to say to give a sense of scale for 2^(10^50); but let me give it a try. We can write down the number of atoms on Earth as a one followed by 50 zeros. If we try to write down 2^(10^50) similarly, then we would basically have to write a one followed by as many zeros as a third of the number of atoms on Earth.
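As a quick sanity check on that comparison, the number of decimal digits of 2^(10^50) is easy to compute:

```python
from math import log10

# Number of decimal digits needed to write out 2**(10**50) in full:
digits = 10**50 * log10(2)
print(f"{digits:.2e}")   # about 3.01e+49, i.e. roughly a third of 10**50
```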

Let me also stress that 2^(10^50) is in fact a very pessimistic lower bound on the number of different patterns that we can explore. Atoms are not all the same. They are made up of smaller parts that we can split up and play with separately. We are not restricted to using only the atoms on Earth, and we can move them further distances away. Also, I do not see why the optimizer of our utility function should be constant in time.[7] In comparison to the potential number of patterns accessible to us, a time scale of 100 billion years is really, really, REALLY ridiculously short.

In order to rescue Bob’s argument, it seems necessary to make the case that, although the space of possible patterns is indeed huge, exploring this space of patterns has only a very limited impact on the growth of our utility function. I find it very difficult to decide whether this is true or not. One has to answer questions such as: (1) How rapidly can we explore this space of patterns? (2) Should we expect our speed of exploration to increase over time, and if so, by how much? (3) How does our utility function increase as we keep improving the quality of the patterns we discover?

I do not know how to answer these questions. But perhaps it will help to broaden our imagination if I suggest a simple mental image of what it could look like for a civilization to be mostly busy trying to explore the space of patterns available to them. Possibly, our future selves will find that the greatest good will be achieved by preparing for and then realizing a coordinated dance performance of cosmic dimension, spanning a region greater than that of the solar system and lasting millions of years. While they will not completely disregard space colonization, they will find greater value in optimizing the choreography of their cosmic dance, preparing for the success of their performance, and then realizing it.[8] Generalizing on what I aim to capture with this example, I find it plausible that highly advanced sentient beings will be very interested in extremely refined and intricate undertakings, comparable to art forms, which we cannot even begin to imagine, but which will score particularly highly for their utility function.

A somewhat Bayesian objection to the idea I am defending here could be: if indeed Bob’s view is invalid, then how come the point defended here has not already become more commonplace within the EA community? This is more tangential and speculative, so I will push a tentative answer to this question into a long footnote.[9]

A closer look at Alice’s point of view (exponential growth)

Aside from Bob’s and Clara’s objections, another type of argument that can be raised against Alice’s view is that, somewhat implicitly, it may conflate the utility function with something that at least vaguely looks like the world GDP; and that in truth, if there were a simple relationship between the utility function and the world GDP, it would rather be that our utility function is the logarithm of the world GDP.

This argument would put Alice’s belief that our utility function can grow at a steady rate over long periods of time into very serious doubt. Under a default scenario where GDP growth is constant, it would mean that our utility function only grows by some fixed amount per unit of time.

It is difficult to argue about the relationship between a state of the world and what the value of our utility function should be. I will only point out that the argument is very sensitive to the precise functional relation we postulate between our utility function and world GDP. Indeed, if we decide that our utility function is some (possibly small) power of the world GDP, instead of its logarithm, then a steady growth rate of GDP does again imply a steady growth rate of our utility function (as opposed to adding a constant amount per unit of time). If there were a relationship between our utility function and the world GDP, then I do not know how I could go about deciding whether our utility function looks more like log(GDP) or more like, say, (GDP)^0.1. If anything, postulating that our utility function looks like (GDP)^x for some exponent x between 0 and 1 gives us more freedom for adjustment between reality and our model of it. I also feel that it would work better under certain circumstances; for instance, if we duplicated our world and created an identical copy of it, I would find it bizarre if our utility function only increased by a constant amount, and find it more reasonable if it were multiplied by some factor.[10]
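To make the contrast concrete, here is a toy computation; the 3% GDP growth rate and the exponent 0.1 are arbitrary illustrative choices, not claims about the world:

```python
from math import log

# Toy comparison: GDP grows at a steady (assumed) 3% per year.
g = 0.03
gdp = lambda t: (1 + g) ** t

for t in (10, 100, 1000):
    log_gain = log(gdp(t + 1)) - log(gdp(t))        # yearly gain of log(GDP)
    pow_rate = (gdp(t + 1) / gdp(t)) ** 0.1 - 1     # yearly growth rate of GDP**0.1
    print(t, round(log_gain, 4), round(pow_rate, 4))

# log(GDP) gains the same fixed amount (~0.0296) every year, so its growth
# rate decays to zero; GDP**0.1 instead keeps a constant ~0.3% yearly growth
# rate, which compounds exponentially over long time scales.
```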

Finally, I want to point out that Alice’s view is not at all sitting at the tail end of some spectrum of possible models. I see no obstacle in the laws of physics to the idea that the growth rate of our utility function will not only remain positive, but will in fact continue to increase without bound for the next 100 billion years. After all, the space of possible patterns of matter we can potentially explore between time t and time 2t grows like exp(t^4),[11] and the growth rate of this function goes up to infinity. If one takes the position that the growth rate of our utility function can increase without bound, then one is led to the conclusion that growth interventions are always to be preferred over existential interventions.
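For what it is worth, the statement about the growth rate of exp(t^4) is a simple calculus fact, which can be checked symbolically (this verifies only the arithmetic, not the speculative exp(t^4) estimate itself):

```python
import sympy as sp

t = sp.symbols("t", positive=True)
u = sp.exp(t**4)
growth_rate = sp.simplify(sp.diff(u, t) / u)  # growth rate in the sense of footnote 3
print(growth_rate)                            # 4*t**3, which increases without bound
```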

A closer look at Clara’s arguments (expectation maximization)

The reasoning that leads Clara to side with Bob is, in my opinion, very fragile.[12] Although I do not think that it is necessary to do so, I find it clearest to explain this by supposing that Alice revises her model and says: “In fact I don’t know how our utility function will grow in the far future. The only thing I am certain about is that the growth rate of our utility function will always be at least 3% per year (outside of the possibility of extinction and of the interventions we do).” This is a weaker assumption about the future than her original model[13], so it should only increase the weight Clara is willing to put on Alice’s model. But with this formulation, whatever Bob’s prediction is for the future, Alice can say that maybe Bob is right up until the moment when he predicts a growth rate below 3%, but from then on she insists on being more optimistic and keeps the growth rate at (or above) 3%. In this way, Alice’s updated model is guaranteed to yield higher expected utility than Bob’s. Roughly speaking, Clara’s procedure essentially consists in selecting the most optimistic model around.[14]

I suppose that a typical line of defense for the expectation-maximization procedure has something to do with the idea that it is, in some sense, “provably” best; in other words, that there is some mathematical reasoning justifying its superiority. I want to challenge this view here with two counter-arguments.[15]

First, the classical argument for expectation maximization relies on the law of large numbers. This law deals with a series of relatively independent variables which we then sum up. It asserts that, in situations where each term contributes little to the overall sum, the total sum becomes concentrated around the sum of the expected values of each contribution, with comparatively small fluctuations. In such situations, it therefore makes sense to maximize the expected value of each of our actions. But, for all I know, there is only one universe in which we are taking bets on the long-term future.[16] If, say, we never update our bet for the long-term future, then there will be no averaging taking place. In such a circumstance, maximizing expected values seems to me rather arbitrary, and I would see no contradiction if someone decided to optimize for some different quantity.

My most important objection to Clara’s reasoning is that, in my opinion, it fails to take into account certain effects which I will call “switching costs”.[17] Although I explored the opposite hypothesis in the previous paragraph, I find it more likely that we will regularly update our predictions on the far future. And, in view of the scale of the uncertainties, I expect that these successive updates will mostly look like random noise.[18] Finally, I expect that the standard expectation-maximization prescription will be all-or-nothing: only do growth interventions, or only do existential interventions. It seems to me that Clara’s calculation is too short-sighted, and fails to take into account the cost associated with revising our opinion in the future. To illustrate this, suppose that Alice, Bob and Clara run a charitable organization called ABCPhil which invests 0.1% of world GDP each year to do the maximal amount of good. Imagine that for the next 10 years, ABCPhil financed only existential interventions; then suddenly switched to financing only growth interventions for the following 10 years; and so on, reversing course completely every 10 years. Now, compare this with the scenario where ABCPhil finances both equally all the time. While this comparison is not straightforward, I would be inclined to believe that the second scenario is superior. In any case, my point here is that Clara’s reasoning, as stated, simply ignores this question, and this may be an important problem.
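To illustrate the kind of effect I have in mind, and nothing more, here is a deliberately crude toy model; the decadal flips, the 30% switching cost, and everything else in it are invented numbers, and it only tracks the budget lost to reorientation, saying nothing about the benefit side of the comparison:

```python
import random

random.seed(0)
YEARS = 100
SWITCH_COST = 0.3   # assumed fraction of a year's budget lost in a year of complete reorientation

def wasted_budget(strategy):
    wasted, previous = 0.0, None
    for year in range(YEARS):
        if year % 10 == 0:
            # Decadal re-evaluation of the best cause, modeled as random noise.
            favored = random.choice(["growth", "existential"])
        portfolio = favored if strategy == "all-or-nothing" else "50/50 split"
        if previous is not None and portfolio != previous:
            wasted += SWITCH_COST   # cost paid whenever the portfolio flips completely
        previous = portfolio
    return wasted

for strategy in ("all-or-nothing", "50/50 split"):
    print(strategy, wasted_budget(strategy))   # only the all-or-nothing strategy pays switching costs
```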

Tentative guidelines for concrete decision-making

In view of the discussion in the previous section, and in particular of the possible problem of “switching costs”, I do not believe that there can be a simple recipe that Clara could just apply to discover the optimal decision. (We will take Clara’s point of view here since she, more reasonably, acknowledges her uncertainty about her model of the future.) The best that can be done is to indicate a few guidelines for decision-making, which then need to be complemented by some amount of “good judgment”.

I find it most useful to think in terms of “reference points”, a small set of possible decisions that each are the result of a certain type of thinking. Once these reference points are identified, “good judgment” can then weigh in and bend the final decision more or less toward a reference point, depending on rough guesses as to the magnitude of the effects that are not captured well under each perspective.

One such reference point is indeed that resulting from the maximization of expected values (which is what Clara was doing in the original story). A second reference point, which I will call the “hedging” decision rule, is as follows. First, Clara calculates the best action under each model of the future; in fact, Alice and Bob have done this calculation for her already. Then, she aggregates the decisions according to the likelihood she places on each model. In other words, she gives money to Alice and Bob in proportion to how much she believes each is right, and then lets them do what they think is best.[19]
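A minimal sketch of these two reference points, with Clara’s 50/50 credences from the story (the expected-gain numbers passed to the first rule are made up purely for illustration):

```python
# Two reference points for Clara, sketched with her 50/50 credences.
credences = {"Alice": 0.5, "Bob": 0.5}
best_action = {"Alice": "growth", "Bob": "existential"}

def expectation_maximizing_allocation(expected_gain):
    # All-or-nothing: the whole budget goes to the action with the highest
    # expected gain under Clara's aggregated model.
    best = max(expected_gain, key=expected_gain.get)
    return {action: float(action == best) for action in expected_gain}

def hedging_allocation(credences, best_action):
    # Budget split across actions in proportion to the credence placed on each
    # model, letting each model's proponent spend their share as they see fit.
    allocation = {}
    for model, weight in credences.items():
        action = best_action[model]
        allocation[action] = allocation.get(action, 0.0) + weight
    return allocation

# Illustrative expected gains in which Bob's plateau dominates, as in the story:
print(expectation_maximizing_allocation({"growth": 0.001, "existential": 0.1}))
# {'growth': 0.0, 'existential': 1.0}
print(hedging_allocation(credences, best_action))
# {'growth': 0.5, 'existential': 0.5}
```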

I want to stress again that I do not claim the hedging decision rule to be always superior to expectation maximization. However, it is also true that, in circumstances in which we believe that the switching costs are large, expectation maximization[20] will lead to conclusions that are inferior to those derived from the hedging decision rule.

The hedging decision rule is designed to be more robust to switching costs. The task of “good judgment” then is to try to evaluate whether these costs (and possibly other considerations) are likely to be significant or not. If not, then one should deviate only very little from expectation maximization. If yes, then one should be more inclined to favor the hedging decision rule.

It is interesting to notice that it is only under Alice’s assumptions that one needs to look at the actual efficiency of each intervention, and that one can come up with a concrete rule for comparing them which is not all-or-nothing.[21] While this has limitations, I find it very useful to have a concrete rule of thumb for comparing the efficiency of different interventions. In a final round of adjustment of Clara’s decisions, I believe that this additional information should also be taken into account. The extent of this final adjustment is again left to Clara’s “good judgment”.[22]

How I came to write this

In this section, I want to explain what led me to write the present article. It comes from my attempt to understand the career recommendations given on the 80k website. The advice given there has changed recently, and in my opinion, the new version strongly suggests that one should give much higher priority to careers related to existential risk reduction than to careers related to, say, improving health in poor countries.[23]

Before spreading the word, I wanted to make sure that I understood and agreed with it. The present article is a summary of my best effort, and my conclusion is that I still don’t understand it.[24]

In a nutshell, I am worried that switching costs have not been estimated properly. This can be because people at 80k feel more certain than I am about the trajectory of the far future; or because they think that switching costs are not very high. I have already discussed my opinion on the trajectory of the far future at length, so I will now only focus on the sort of switching costs I am worried about.

Suppose that I am very interested in improving health in poor countries; and that I am not all that convinced by relatively convoluted arguments about what will happen to sentient life in billions of years. Even if everyone in the EA community has the best intentions, I would personally find it depressing to be surrounded by people who think that what I intend to do is of negligible importance. I would also feel the pressure to switch to topics such as AI safety, an extremely competitive topic requiring a lot of expertise. I think I would be very likely to simply leave the group.

Imagine now that in 10 years, someone comes up with a great argument which suddenly convinces the EA community that growth interventions are actually vastly superior to existential interventions. If most people interested in growth interventions have left the group, it will be extremely difficult for the EA community to bear the transition. At first, at least some people working on AI safety will consider that the cost of switching is too high, and that working on AI safety still kind of makes sense. As time passes, and supposing that the EA community has managed to transition to growth interventions without simply disintegrating, people working on AI safety will grow tired of being reminded that their work is of negligible importance, and will tend to leave the group. And so on, up until the next switch of opinion.

Notice also that in the fictitious scenario outlined above, it will in fact be quite difficult for the “great argument” to emerge from the EA community, and then also very hard for it to be known and acknowledged, since people with different interests no longer interact. And I am not even discussing possible synergistic effects between EA people working on different cause areas, which in my opinion can also be very significant.

Conclusion

This article examined the view that interventions aiming to reduce existential risk are virtually infinitely superior to those that aim to accelerate growth.

In my understanding, this view relies crucially on the assumption that the utility of the future cannot grow exponentially in the long term, and will instead essentially reach a plateau. While I do not intend to rule out this possibility, I tried to explain why I personally find the alternative possibility of sustained exponential growth at least plausible.

One way to aggregate these different predictions about the far future is to compute the expected value of different interventions, taking our uncertainty about the far future into account. In my opinion, this approach has important limitations, in particular because it ignores certain “switching costs”.

The present article is a summary of my attempt to understand some of the ideas which I consider central to the EA movement. I suppose that the people working full-time on the problems discussed here have a much deeper understanding of the issues at stake, and a much finer position than the one I have outlined here. I hope that this article will signal that it may currently be very difficult to reverse-engineer what this finer position is. If nothing else, this article can thus be taken as a request for clarification.

Acknowledgements. I would like to warmly thank the members of the French EA community, and in particular Laura Green and Lennart Stern, for their support and very useful feedback.


  1. ↩︎

    The point here is not about wondering if this number is reasonable. Rather, it is about seeing how this number enters (or does not enter) the decision process. But, to give some substance to it, if we very crudely conflate our utility function with world GDP, then I think it is reasonable to place a return of at least a factor of 10 on some of the better growth investments.

  2. ↩︎

    Again I want to stress that the point here is not to debate these numbers (I was told that a decrease of the extinction risk of 0.1 percentage point for an investment of 0.1% of the world GDP was reasonable, but found it difficult to find references; I would appreciate comments pointing to relevant references).

  3. ↩︎

The growth rate measures the instantaneous increase of the function, in proportion to the size of the function. In formulas, if y(t) is the function, then the growth rate is y’(t)/y(t) (this is also the derivative of log(y(t))).

  4. ↩︎

If you feel uncomfortable with the idea that the function t^3 looks like a constant, let me stress that all the reasonings here are based on rates of growth. So if we were plotting these curves, it would be much more informative to draw the logarithm of these functions. And, really, log(t^3) does look like a constant when t is large. To be more precise, one can check that Bob’s conclusions hold as long as the growth rate falls to essentially zero sufficiently quickly compared with the 100 billion year time scale, so that we can conflate any such scenario with a “plateau” scenario. In formulas, denote the present time by t1 and the final time by T = t1 + 100 billion years. If we postulate that our utility function at time t is t^3, then the total utility of the future is the integral of this function for t ranging between t1 and T. The growth intervention allows us to replace this function by (t+s)^3, where s is such that (t1 + s)^3/t1^3 = 1.01. When we calculate the total utility of the future for this function, we find that it amounts to integrating the function t^3 for t varying between t1 + s and T + s. The utility gain caused by the intervention is thus essentially the integral between T and T+s of the function t^3 (the discrepancy near t1 is comparatively very small). This is approximately s T^3, which is extremely small compared with the total integral, which is of the order of T^4 (the ratio is of the order of s/T, which essentially compares the speedup brought by the intervention with the time scale of 100 billion years).

  5. ↩︎

This was my experience when talking to people, and has been confirmed by my searching through the literature. In particular, this 80k article attempts to survey the views of the community (to be precise, “mostly people associated with CEA, MIRI, FHI, GCRI, and related organisations”), and states that although a growth intervention “looks like it may have a lasting speedup effect on the entire future”, “the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.” Nick Bostrom argues in more detail for a plateau in The Future of Humanity (2007), and in footnote 20 of his book Superintelligence. The “plateau” view is somewhat implicit in Astronomical Waste (2003), as well as in the concept of “technological maturity” in Existential Risk Prevention as Global Priority (2013). This view was perhaps best summarized by Holden Karnofsky here, where he says: “we’ve encountered numerous people who argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others.” (See however here for an update on Karnofsky’s ideas.)

  6. ↩︎

    I use the word “we” but I do not mean to imply that sentient beings in the far future will necessarily look like the present-day “we”.

  7. ↩︎

    To take a trivial example, I clearly prefer to watch a movie from beginning to end than to stare at a given frame for two hours. So my utility function is not simply a function of the situation at a given time which I then integrate. Rather, it takes into account the whole trajectory of what is happening as time flows.

  8. ↩︎

As another illustration, Elon Musk is surprised that space colonization is not receiving more attention, while many others counter that there is already a lot of useful work to be done here on Earth. I am suggesting here that this sort of situation may remain broadly unchanged even for extremely advanced civilizations. Incidentally, this would go some way toward mitigating Fermi’s paradox: maybe other advanced civilizations have not come to visit us because they are mostly busy optimizing their surrounding environment, and don’t care all that much about colonizing space.

  9. ↩︎

    For one, I wonder if my cultural background in continental Europe makes me more likely to defend the point of view expressed here. It seems to me that the opposite view is more aligned with a rather “atomist” view of the ideal society as a collection of relatively small and mostly self-reliant agents, and that this sort of view is more popular in the US (and in particular in its tech community) than elsewhere. It also blends better with the hypothesis of a coming singularity. On a different note, we must acknowledge that the EA community is disproportionately made of highly intellectualizing people who find computer science, mathematics or philosophy to be very enjoyable activities. I bet it will not surprise anyone reading these lines if I say that I do math research for a living. And, well, I feel like I am in a better position to contribute if the best thing we can do is to work on AI safety, than if it is to distribute bed nets. In other words, the excellent alignment between many EAs’ interests and the AI safety problem is a warning sign, and suggests that we should be particularly vigilant that we are not fooling ourselves. This being said, I certainly do not mean to imply that EAs have a conscious bias; in fact I believe that the EA community is trying much harder than is typical to be free of biases. But a lot of our thinking processes happen unconsciously, and, for instance, if there is an idea around that looks reasonably well thought-of and whose conclusion I feel really happy with, then my subconscious thinking will not think as hard about whether there is a flaw in the argument as if I was strongly displeased with the idea. Or perhaps it will not bring it as forcefully to my conscious self. Or perhaps some vague version of a doubt will reach my conscious self, but I will not be most willing to play with this vague doubt until it can mature into something sufficiently strong to be communicable and credible. Other considerations related to the sociology of our community appear in Ben Garfinkel’s talk How sure are we about this AI stuff? (EAG 2018). A separate problem is that questioning an important idea of the EA movement requires, in addition to a fairly extensive knowledge of the community and its current thinking, at least some willingness to engage with a variety of ideas in mathematics, computer science, philosophy, physics, economics, etc., placing the “barrier to entry” quite high.

  10. ↩︎

I suppose that total utilitarians would want the utility function to be doubled in such a circumstance (which would force the exponent x to be 1). My personal intuition is more confused, and I would not object to a model in which the total utility is multiplied by some number between 1 and 2. (Maybe I feel that there should be some sort of premium on originality, so that creating an identical copy of something is not quite twice as good as having only one version of it; or maybe I’m just willing to accept that we are looking for a simple workable model, which will necessarily be imperfect.)

  11. ↩︎

To be precise, this estimate implicitly postulates that we find matter to play with roughly in proportion to the volume of space we occupy, but in reality matter in space is rather more sparsely distributed. For a more accurate estimate, we can consider the effective dimension d of the distribution of matter around us, so that the amount of matter within distance r from us grows roughly like r^d. The estimate in the main text should then be exp(t^(1+d)), and this number d is in fact most likely below 3. However, I think any reasonable estimate of d will suggest that it is positive, and this suffices to ensure the validity of the conclusion that a growth rate that increases without bound is possible.

  12. ↩︎

    I do not mean to imply that Clara’s view is typical or even common within the EA community (I just don’t know). I mostly want to use this character as an excuse to discuss expectation maximization.

  13. ↩︎

    To be precise, Alice had not specified the constant growth rate she had in mind; here I take it that her original claim was “our utility function will always grow at a rate of about 3% per year”.

  14. ↩︎

    For the record, Bostrom wonders in The vulnerable world hypothesis (2018) whether there would be ways, “using physics we don’t currently understand well, to initiate fast-growing processes of value creation (such as by creating an exponential cascade of baby-universes whose inhabitants would be overwhelmingly happy)”. I wonder how a hard-core expectation maximizer would deal with this possibility.

  15. ↩︎

    A notable critique of expectation maximization by Holden Karnofsky, somewhat different from what is discussed in this text, can be found here. (See however here for an update on Karnofsky’s ideas.)

  16. ↩︎

    I wonder if people will counter with appeals to quantum mechanics and multiple universes. At any rate, if this is the sort of reasoning that underpins our decisions, then I would like it to be made explicit.

  17. ↩︎

    Related considerations have been discussed here.

  18. ↩︎

    I’ll make the optimistic assumption that we are not biased in some direction.

  19. ↩︎

    My understanding is that this is in the ballpark of how OpenPhil operates.

  20. ↩︎

    at least in the “short-sighted” form expressed by Clara in the story

  21. ↩︎

    Let me recall this decision rule here: if the growth intervention causes a growth of our utility function of x%, and the existential intervention reduces the probability of extinction by y percentage points, then we choose the growth intervention when x > y, and the existential intervention otherwise.

  22. ↩︎

Strictly speaking, I think that we could actually write down complicated models encoding the strength of the switching costs etc., and identify explicit formulas that take every aspect discussed so far into consideration. But I am skeptical that we are in a good position to evaluate all the parameters that would enter such complicated models, and I prefer to just push the difficulties into “good judgment”. I would personally give more credibility to someone telling me that they have done some “good judgment” adjustments and explaining to me in words the rough considerations they put into it, than to someone telling me that they have written down very complicated models, estimated all the relevant parameters and then applied the formula.

  23. ↩︎

    I think this is most strongly expressed in this article, where it is stated that “Speeding up events in society this year looks like it may have a lasting speedup effect on the entire future—it might make all of the future events happen slightly earlier than they otherwise would have. In some sense this changes the character of the far future, although the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.” The key ideas article states that interventions such as improving health in poor countries “seem especially promising if you don’t think people can or should focus on the long-term effects of their actions”. This implicitly conveys that if you think that you can and should focus on long-term effects, then you should not aim to work in such areas.

  24. ↩︎

    See also here for similar concerns.