This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many, many orders of magnitude worse than delaying progress, as it affects many, many orders of magnitude more (potential) people.
I think it would require an unreasonably radical interpretation of longtermism to believe, for example, that delaying something as valuable as a cure for cancer by 10 years (or another comparably significant breakthrough) would be justified, let alone overwhelmingly outweighed, because of an extremely slight and speculative anticipated positive impact on existential risk. Similarly, I think the same is true about AI, if indeed pausing the technology would only have a very slight impact on existential risk in expectation.
I’ve already provided a pragmatic argument for incorporating at least a slight amount of time discounting into one’s moral framework, but I want to reemphasize and elaborate on this point for clarity. Even if you are firmly committed to the idea that we should have no pure rate of time preference—meaning you believe future lives and welfare matter just as much as present ones—you should still account for the fact that the future is inherently uncertain. Our ability to predict the future diminishes significantly the farther we look ahead. This uncertainty should generally lead us to favor not delaying the realization of clearly good outcomes unless there is a strong and concrete justification for why the delay would yield substantial benefits.
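To make this concrete, here is a minimal sketch with purely illustrative numbers of my own choosing: if every additional year of forecasting horizon carries some independent chance that our plans or predictions no longer apply, then a benefit realized t years from now loses expected value geometrically, even with zero pure time preference.

```python
# Sketch: epistemic uncertainty acting like a discount rate.
# Assumption (hypothetical number): each extra year of forecasting horizon
# carries an independent 1% chance of invalidating our plan or prediction.
p_plan_still_valid_per_year = 0.99

def expected_value(benefit: float, years_delayed: float) -> float:
    """Expected value of a benefit realized after a delay, with zero pure
    time preference but compounding predictive uncertainty."""
    return benefit * p_plan_still_valid_per_year ** years_delayed

for delay in [0, 10, 50, 100]:
    print(delay, round(expected_value(1.0, delay), 3))
# 0 -> 1.0, 10 -> ~0.904, 50 -> ~0.605, 100 -> ~0.366:
# delaying a clearly good outcome costs expected value even before
# any moral discounting of future people.
```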
Longtermism, as I understand it, is simply the idea that the distant future matters a great deal and should be factored into our decision-making. Longtermism does not—and should not—imply that we should essentially ignore enormous, tangible and clear short-term harms just because we anticipate extremely slight and highly speculative long-term gains that might result from a particular course of action.
I recognize that someone who adheres to an extremely strong and rigid version of longtermism might disagree with the position I’m articulating here. Such a person might argue that even a very small and speculative reduction in existential risk justifies delaying massive and clear near-term benefits. However, I generally believe that people should not adopt this kind of extreme strong longtermism. It leads to moral conclusions that are unreasonably detached from the realities of suffering and flourishing in the present and near future, and I think this approach undermines the pragmatic and balanced principles that arguably drew many of us to longtermism in the first place.
I don’t care about population ethics so don’t take this as a good faith argument. But doesn’t astronomical waste imply that saving lives earlier can compete on the same order of magnitude as x-risk?
https://nickbostrom.com/papers/astronomical-waste/
In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.
However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
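As a rough sketch of the arithmetic behind the "10 million years" comparison above (the specific magnitudes here are placeholder assumptions in the spirit of the passage, not figures taken from the paper): if the accessible future lasts on the order of ten billion years, a ten-million-year delay forfeits roughly a 0.1% slice of it, while removing one percentage point of existential risk raises the expected value of the whole future by 1%.

```python
# Back-of-the-envelope version of the Astronomical Waste comparison.
# Assumptions (placeholders, not from the paper): the accessible future
# lasts on the order of ten billion years, and value accrues roughly
# uniformly over that span once colonization happens.
future_lifespan_years = 1e10      # order of magnitude for the usable cosmic future
total_future_value = 1.0          # normalize the whole future to 1 unit

# Cost of delay: a 10-million-year delay forfeits at most the value
# produced during that window, i.e. a ~0.1% slice of the future.
delay_years = 1e7
delay_cost = total_future_value * delay_years / future_lifespan_years   # 0.001

# Benefit of risk reduction: one percentage point of existential risk
# removed raises the expected value of the entire future by 1%.
risk_reduction_benefit = 0.01 * total_future_value                      # 0.01

print(delay_cost, risk_reduction_benefit)
# 0.001 < 0.01: under these assumptions the 1% risk reduction is worth
# roughly ten times more than the 10-million-year delay costs.
```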
For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
I’m curious how many EAs believe this claim literally, and think a 10 million year pause (assuming it’s feasible in the first place) would be justified if it reduced existential risk by a single percentage point. Given the disagree votes to my other comments, it seems a fair number might in fact agree to the literal claim here.
Given my disagreement that we should take these numbers literally, I think it might be worth writing a post about why we should have a pragmatic non-zero discount rate, even from a purely longtermist perspective.
I think fixed discount rates (i.e. a fixed discount rate per year) of any amount seem pretty obviously crazy to me as a model of the future. We use discount rates as a proxy for things like “predictability of the future” and “constraining our plans towards worlds we can influence”, which often makes sense, but I think even very simple thought-experiments produce obviously insane conclusions if you use practically any non-zero fixed discount rate.
See also my comment here: https://forum.effectivealtruism.org/posts/PArvxhBaZJrGAuhZp/report-on-the-desirability-of-science-given-new-biotech?commentId=rsqwSR6h5XPY8EPiT
This report seems to assume exponential discount rates for the future when modeling extinction risk. This seems to lead to extreme and seemingly immoral conclusions when applied to decisions that previous generations of humans faced.
I think exponential discount rates can make sense in short-term economic modeling, and can be a proxy for various forms of hard-to-model uncertainty and the death of individual participants in an economic system, but applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermines the results of any analysis about the long-run future).
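As a quick illustration of how fast this happens (my own sketch, with the rates chosen only as examples): under a fixed annual rate r, welfare t years away gets weight (1 - r)^t, so for any r > 0 there is a horizon beyond which the entire remaining future is weighted less than a trivial present benefit.

```python
# How fast a fixed annual discount rate drives the far future to ~zero.
import math

def discount_factor(rate_per_year: float, years: float) -> float:
    """Weight placed on welfare that is `years` in the future."""
    return (1.0 - rate_per_year) ** years

for rate in [0.001, 0.005, 0.01, 0.03]:
    # Horizon at which the future is weighted at one-millionth of the present.
    horizon = math.log(1e-6) / math.log(1.0 - rate)
    print(f"rate {rate:.1%}: factor at 10,000y = {discount_factor(rate, 1e4):.2e}, "
          f"weight falls below 1e-6 after ~{horizon:,.0f} years")
# Even at 0.1% per year, the weight on the future falls below one millionth
# of the present within ~14,000 years, a blink relative to the timescales
# discussed in the quoted passages above.
```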
The report says:
However, for this equation to equal 432W, we would require merely that ρ = 0.99526. In other words, we would need to discount utility flows like our own at 0.47% per year, to value such a future at 432 population years. This is higher than Davidson (2022), though still lower than the lowest rate recommended in Circular A-4. It suggests conservative, but not unheard of, valuations of the distant future would be necessary to prefer pausing science, if extinction imperiled our existence at rates implied by domain expert estimates.
At this discount rate, you would value a civilization living 10,000 years in the future, which is something that past humans' decisions did influence, at less than one billion-billionth of the value of their own civilization at the time. By this logic, ancestral humans should have accepted a trade in which they got a slightly better meal, or a single person lived a single additional second (or anything else that improved a single person's life by more than a billionth of a percent), at the cost of present civilization never coming into existence.
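For what it's worth, the numbers check out under the report's own ρ; a quick verification sketch:

```python
# Checking the discount factor implied by rho = 0.99526 over 10,000 years.
rho = 0.99526
years = 10_000

weight_on_future_civilization = rho ** years
print(weight_on_future_civilization)          # ~2.3e-21

# "Less than one billion-billionth": 1e-18 is one billion-billionth.
print(weight_on_future_civilization < 1e-18)  # True

# So any present gain worth more than ~2.3e-21 of present civilization,
# far less than a billionth of a percent (1e-11), would under this
# discounting outweigh that civilization 10,000 years later.
print(weight_on_future_civilization < 1e-11)  # True
```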
This seems like a pretty strong reductio ad absurdum, so I have trouble taking the recommendations of the report seriously. From an extinction-risk perspective, it seems that if you buy exponential discount rates as aggressive as 1%, you are basically committed to not caring about future humans in any substantial way. It also seems to me that various thought experiments (like the above ancestral human deciding whether to put up with the annoyance of stepping over a stone or instead cause the destruction of our entire civilization) demonstrate that such discount rates almost inevitably recommend actions that seem strongly in conflict with various common-sense notions of treating future generations with respect.