Existential Hope and Existential Risk: Exploring the value of optimistic approaches to shaping the long-term future
Epistemic Status
Personal takes aimed at raising discussion and getting input. This is a side product of another strategy project, rather than a systematic and rigorous analysis of longtermist movement building and different concepts related to (very) positive long-term futures. We have not been able to confirm with the original authors that we represent their ideas correctly, and while we have done our best and tried to refrain from speculating about their views, we might be mistaken. We apologize in advance and will do our best to update the post as needed.
One of the authors (@elteerkers) has a clear bias as she works at Foresight Institute and runs their existential hope program (both of which will be referred to in the post). The other author (@Vilhelm Skoglund) is also somewhat biased, as he is elteerkers’ partner and has been interacting a lot with Foresight and their existential hope program. This is written in a personal capacity and does not represent the views of our employers.
We think the most significant downside risks of this post are that:
We come across as making a case for redirecting resources away from work to decrease existential risk toward work on increased long-term flourishing, thus increasing tension between people/groups on different sides of this spectrum. (Rather, we wish to raise a discussion to help ourselves and others better build the community of people interested in working to shape the long-term future.)
People overupdate on these takes or do not critically consider what they might mean for their respective contexts.
Summary
This post explores whether there could be value in considering alternative approaches to motivating work that aims to shape the long-term future. More concretely, we consider strategies aiming to increase upside risk to complement what we see as the current primary approach—decreasing large downside risk (primarily existential risk). We begin by describing a few terms regarding pursuing upside risk, such as “existential hope,” “utopia,” and “positive trajectory change,” and consider how these terms are used today and what work (if any) is being done under these banners in the EA and longtermist community.
We find all these terms relevant to consider when working to shape the long-term future. As we mainly want to consider the potential effects of approaches and framings pursuing upside risk in general, we use the term “very positive futures” (VPF) for the remainder of the post, where we discuss the potential arguments for and against putting more emphasis on pursuing upside risk.
Potential arguments for focusing more on VPF approaches and framings include:
A crucial consideration when deciding on how to prioritize between near-term and long-term causes and interventions (including reducing existential risk) is the potential value of the future. And to understand the potential value of the future, it is important to understand VPF.
Understanding what positive long-term futures could look like and what would make them more or less likely could aid in prioritizing between different causes and interventions aimed at shaping the far future.
Putting more emphasis on VPF in communication and crafting compelling visions of the future might:
Attract more people to work on improving the long-term future (including reducing existential risk) by creating a better understanding of how good the future could be, as well as increasing people’s emotional attachment to the future.
Help in bridging the gap between communities working on reducing existential risk from emerging technologies and techno-optimists who are skeptical of attempts to govern or slow down work on technological capabilities. This might be especially important if such techno-optimists are working on two-sided technologies and become more interested in differential technological development under a VPF approach.
Building on the above, contemporary culture seems biased toward dystopian depictions of the future. Apart from making people pessimistic about the future and inhibiting action, this might make people underestimate the potential of scientific and technological advances and thus become generally skeptical of societal investments in these fields.
Considering the potential of AI to accelerate innovation rapidly, a radically different future could be here soon. If this is the case, we need research and work on VPF to guide this acceleration towards desirable outcomes.
As far as we can tell, little work has been done on VPF, and we think the potential value is high while very uncertain. Thus, this area could have low-hanging fruit and likely substantial information value to be gained.
Potential arguments against focusing more on VPF approaches and framings include:
We should prioritize work on reducing existential risk. The authors of this post think work on existential risk is extremely important, and we are unsure if the current relative proportions of work to increase upside risk and decrease downside risk are sound. But you might think it is clear that the current relative proportions are sound or that there should be more focus on existential risk. A few reasons for this position could be that you think:
We can work on VPF when we have reduced more urgent existential risk to a sufficient level.
There is a “low” upper bound on how good the future could be, which future civilizations can be expected to reach by default.
It is not tractable to find and/or take actions with big, contingent, and persistent effects on the long-term future other than reducing existential risk.
Work on VPF could (unintentionally) increase existential risks, e.g., by accelerating AI development without proper safety considerations.
Putting more emphasis on VPF might not be an effective way to attract more people to work on improving the long-term future, or it might attract people who are less sympathetic to the complexities of long-term challenges, leading to ineffective or even harmful work.
Utopian pursuits have a bad track record. Similarly, work on VPF risks fostering naive overconfidence, leading to ill-conceived solutions or extreme measures that may cause more harm than good.
We end the post by asking for input and discussion on the topic. A few key uncertainties we would love more clarity on are the current proportions of work dedicated to VPF versus existential risk, the “default trajectory” of the future conditional on survival, and ideas for promising causes or pathways for VPF.
Introduction
Most people in the EA community working to shape the long-term future focus on avoiding existential risk, for example, by preventing unaligned AI or engineered pandemics. There are good reasons for this: existential catastrophes are possible, and many think we are now experiencing a heightened level of risk; if one did occur, we would be unable to realize our civilization’s long-term potential; and, while it is difficult, we can act to avoid existential risk.
However, avoiding existential risk isn’t the only way we can shape the long-term future. Rather than only avoiding large downside risks, we could also act to realize large upside risks. For example, if we think humanity’s values might get locked in, we can take strategic action to improve our values today, which may lead to humanity being guided by better values for millennia to come. Or we might be able to speed up progress persistently—giving humanity access to life-altering technology or social development sooner than would otherwise have happened.
We think there could be value in considering such (complementary rather than rival) approaches for the community of people working to shape the long-term future. We mean this partly in terms of what causes and interventions to focus on and partly in terms of how we talk about and frame our work. At the very least, we think this area would benefit from more conceptual clarity and deserves exploration. That is what we hope to do in this post. More precisely, we aim to:
Explore a few ideas and terms related to very positive long-term futures (probably skip this if you have thought a lot about alternative approaches to shaping the long-term future)
Present potential benefits for the EA and wider longtermist community of focusing more on very positive long-term futures
Present potential drawbacks of focusing more on very positive long-term futures
List our main uncertainties and raise questions for further discussion
1. Ideas and terms regarding positive long-term futures
Introduction
Before exploring the arguments for and against focusing more on pursuing large upside risk, we want to describe the frameworks and terms we think seem most relevant in this domain—Existential Hope, Utopia(s), and Positive Trajectory Changes. Note that this is not an exhaustive list, and our understanding of these concepts might differ from how others use them. Please let us know if you think something important is lacking or disagree with what we say!
Existential Hope (and Eucatastrophes)
Origin
The terms existential hope (and eucatastrophe) were, in the meaning used here, introduced by Toby Ord and Owen Cotton-Barratt in the paper “Existential Risk and Existential Hope: Definitions.” They start the paper by exploring different definitions related to existential risk and suggest that a definition via expectations might be better than the traditional Bostrom definition, as it handles non-binary events better. (Note that Ord seems to have since reverted to preferring Bostrom’s traditional definition; at least, it is the one he uses in The Precipice.)
More precisely, they suggest the following definitions:
“An existential catastrophe is an event which causes the loss of a large fraction of expected value.”
“Existential risk is the chance of an existential catastrophe taking place.”
With that background, we can move on to existential hope and eucatastrophes. When existential risk and existential catastrophe are defined this way, via expectations, it is quite natural to consider these parallel concepts as “the other side of the coin.”
Ord’s and Cotton-Barratt’s definitions:
“An existential eucatastrophe is an event which causes there to be much more expected value after the event than before.”
“Existential hope is the chance of an existential eucatastrophe taking place.”
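In informal notation of our own (not from the paper), write $\mathbb{E}_{t^-}[V]$ and $\mathbb{E}_{t^+}[V]$ for the expected value of the long-term future just before and just after an event at time $t$. The four definitions can then be summarized as:

$$\text{existential catastrophe: } \mathbb{E}_{t^+}[V] \ll \mathbb{E}_{t^-}[V] \qquad\quad \text{existential eucatastrophe: } \mathbb{E}_{t^+}[V] \gg \mathbb{E}_{t^-}[V]$$

with existential risk and existential hope being the probability of each kind of event occurring.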
Many uncertainties arise with this definition, especially concerning what they see as the threshold for “much more expected value.” Our understanding is that they have a (very) high bar in mind. The origin of life is mentioned as a specific eucatastrophe. Also, they mention the rise of multicellular life and intelligence as eucatastrophes, to the extent that they were not inevitable, and suggest successfully passing any ‘great filter’ as a general type of eucatastrophe. A further indication of their high bar is that in an EA Forum post, Owen Cotton-Barratt has said that he does not think successful whole-brain emulation or full automation of the labor required to maintain a comfortable style of living for everyone would amount to eucatastrophes, as it is unclear what the effect on the long-term expected value of these events would be.
Use today
As far as we can tell, the organization that uses the terms existential hope and eucatastrophe most today is Foresight Institute. Note that the co-author of this post, elteerkers, works at Foresight Institute.
Their use of the term existential hope is much broader than what Ord and Cotton-Barratt initially seem to have intended. On Foresight’s existential hope website, it is explained in the following way: “Existential hope is a way of looking at the future with a lens that acknowledges the potential for positive change in the world. By focusing on what is actually possible, we can start to map out the main developments and challenges that need to be navigated to reach our desired outcome.”
Foresight also uses the term “eucatastrophe” in a much broader way than Ord and Cotton-Barratt, simply stating that “A eucatastrophe would be the opposite of a catastrophe, i.e., an event after which the expected value of the world would be much higher.” There is no definition of how much higher the value would have to be to count as a eucatastrophe. Examples of what is referred to as potential eucatastrophes on their website are inventions like cryopreservation or safe fusion energy.
Foresight often communicates what they believe existential hope is in relation to existential risk. When asked why they work with existential hope, Foresight CEO Allison Duettmann has said:
“We think it is useful to engage with the possibility that we may not be doomed and to consider what may be on the other side. There are many great projects dedicated to identifying the threats to human existence, but very few serious projects offer guidance on what to aim for instead. Our Existential Hope work is focused on generating the common knowledge needed to coordinate toward beautiful futures. By mapping the space of peril and promise which lie before us, perhaps we can use that knowledge to actively shoot for promising futures, rather than merely avoiding the terrifying ones.”
As alluded to in this quote, when trying to improve the long-term future, Foresight focuses on existential hope not only in considering what (causes) to work on but also in how to talk about this work. While acknowledging risks and the need to be careful, Foresight aims to imbue people with hope and excitement about the future.
Utopias and describing very positive futures
Origin
The history of the term “utopia” is too long and broad to go over in this post. Suffice it to say that it is a highly established term, coined by Sir Thomas More in 1516 when he wrote a fictional book of the same name. The word is derived from the Greek words “ou,” meaning “not,” and “topos,” meaning “place,” which together suggest that a utopia is a “no-place,” or an idealized, nonexistent society.
Over time, the term “utopia” has come to represent any visionary or idealized state of society and has been used in a variety of contexts, both positive and negative. Some have also sought to take utopian ideals a step further and turn them into reality through planned communities or political reforms, sometimes leading to significant suffering. More on this below.
We recommend the Wikipedia article for a more comprehensive account of the history of the word Utopia and how it has been used more generally.
Use today
Today, utopia is used in many contexts. Below, we will focus on use within EA and the community of people working to shape the long-term future.
Several people in EA and the community working to shape the long-term future have written about utopia. While not all offer explicit definitions, and they use the term somewhat differently, we think it is fair to say that most use it to refer to “a profoundly good future” (borrowing words from Bostrom).
A common theme in these writings is that humanity, along with our descendants, has the potential to construct a profoundly good future given sufficient patience and wisdom. Another theme centers on attempts to describe concrete utopias and on why doing so is both important and challenging.
To avoid the difficulties of describing appealing concrete utopias and to aid in steering toward positive futures, authors have described different types of utopias or paths toward them, such as:
Mild to Radical Utopias—Introduced by Holden Karnofsky, who argues that utopias can be thought of as existing on a spectrum. Mild utopias are those that make incremental improvements to the current world and do not fundamentally challenge the status quo. These utopias might involve minor changes to laws and policies or the implementation of new technologies that make life better in some way. Radical utopias are those that involve more significant changes to the way society is organized and operated.
Concrete vs Sublime Utopias—Introduced by Joseph Carlsmith, who worries that most imagined utopias today are too small: most depictions fall into the category of concrete utopias, while sublime utopias may be too hard for us even to imagine, given how we function and are limited as human beings.
Protopia—Introduced by Kevin Kelly to describe a state of continuous improvement and incremental progress in the future rather than a fixed and static utopian outcome.
Weirdtopia—Introduced by Eliezer Yudkowsky to capture the fact that the future will seem weird to us; it may even be indistinguishable from a dystopia to us today. That is because values drift a lot over time, and with technological progress speeding up and creating new social realities, values may drift faster and farther still.
Paretotopia—Introduced by Eric Drexler and Mark S. Miller as a comparison to phototropism in plants. Just as plants naturally grow towards light, Paretotopia represents a utopian state where society gravitates towards Pareto-preferred outcomes. They argue that we already live in a form of Paretotopia, as voluntary interactions gradually move civilization into Pareto-preferred directions, e.g., we have become less violent over time, and various aspects of life, such as health and education, have improved.
Worldbuilding
A concept related to imagining positive futures that seems worth noting is worldbuilding. We understand worldbuilding to be a mix of art and science (with more or less focus on one of them depending on context) that aims to construct a coherent, and preferably detailed, fictitious depiction of a world. When exploring possible futures for our own world, worldbuilding can serve as a tool to gain a deeper understanding of which types of worlds may be more or less desirable and potentially how to navigate towards a more desirable reality.
Worldbuilding has been employed in fields like emergency and defense planning. In a paper from RAND, it is argued that worldbuilding can help create a system for identifying concrete actions that “lead to both new thinking and coordinated effort [...] in a direction that aims at a desirable future, not merely a default one” and “help(s) inject more focus on human agency into our consideration of technology futures by exploring purposely different scenarios that stretch the imagination to think about what we would like to have happen within a robust framework for how it might happen.”
For interesting examples of worldbuilding and imagined worlds, see Future of Life Institute’s Worldbuilding contest and listen to their podcast, where top entries are explored.
(Positive) Trajectory Change
Origin
In his Ph.D. thesis, Nick Beckstead introduced the concept of the world’s development trajectory (trajectory for short), which is a rough summary of the way the future will unfold over time. This summary includes various facts about the world, e.g., “how rich people are, what technologies are available, how happy people are, how developed our science and culture is (...), and how well things are going all-things-considered at different points of time.” Notably, the concept of a trajectory change is related to the concept of path dependence (whenever some aspect of the world could easily have been arranged in way X but is arranged in way Y due to something that happened in the past) and historical contingency (chance-influenced events with substantial long-term effects).
A positive trajectory change occurs when our actions improve the long-term path along which our descendants will walk.
Use today
Various actors use the concept of trajectory changes today. For example, Will MacAskill explores it in What We Owe the Future, distinguishing between attempts to ensure civilizational survival (avoiding extinction) and attempts to change our trajectory (improving the quality of future people’s lives). Further, he introduces a framework for assessing the expected value of attempts to influence the future, built on three main criteria: significance, persistence, and contingency.
Significance considers the impact of an event on the world’s overall value. It gauges the difference an event makes, whether positive or negative.
Persistence evaluates the duration of that impact. Essentially, how long will the consequences of a particular event last?
Contingency considers whether the event is likely to have occurred without any intervention. It questions the inevitability of an outcome.
For instance, let’s examine species loss. One should assess the value that a specific species brings to the world. This is its significance. Next, consider the permanence of its loss: once a species is gone, can it ever be reclaimed? This is its persistence. Finally, it’s essential to evaluate contingency: even if efforts were made to prevent a species’ extinction, could external forces still lead to its demise shortly after?
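One rough way to combine the three criteria (our own sketch, not a formula from the book) is to treat the expected long-term value of an intervention as approximately multiplicative in them:

$$\Delta V \approx s \times d \times c$$

where $s$ is significance (value added per unit time), $d$ is persistence (expected duration of the effect), and $c$ is contingency (the probability the outcome would not have happened anyway). On this reading, an intervention can fail to matter in three distinct ways: it adds little value, its effect washes out quickly, or it merely brings about something that would have happened regardless.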
Toby Ord recently published a great paper describing different potential positive trajectory changes. To learn more about these trajectories, we recommend reading the paper or watching this clip, as the graphics are helpful for gaining an understanding. Below is a very rough summary of the trajectories Ord considers (other than avoiding extinction), with a schematic formalization after the list:
Advancements – If we think the future is likely to be better than the present, we could try to improve it by shifting the trajectory of humanity’s instantaneous value earlier by some amount of time. This may correspond roughly to advancing all forms of progress by that same amount of time.
Speed-ups – Instead of only advancing progress by some amount, we could maybe do something to permanently speed it up—to achieve in 100 years what would otherwise have taken us 101, achieve in 1,000 years what would otherwise have taken 1,010, and so on. As this would have a proportionally more significant effect further into the future, it wouldn’t be shifting the future trajectory but rather compressing it.
Gains – What if, instead of adjusting the timings of the future trajectory, we could directly adjust the instantaneous value itself? A gain is a direct increase of instantaneous value by a constant absolute amount.
Enhancements – It may also be possible to permanently improve humanity’s instantaneous value by a given proportion. E.g., making every moment across humanity’s future 1% more valuable. Ord calls this an enhancement.
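In schematic notation of our own (roughly following the paper), let $v(t)$ be humanity’s instantaneous value at time $t$ and $V = \int_0^T v(t)\,dt$ the total value up to extinction at time $T$. The four trajectory changes then act on $v$ as follows:

$$\text{advancement: } v(t) \mapsto v(t+a) \qquad\quad \text{speed-up: } v(t) \mapsto v\big((1+s)\,t\big)$$

$$\text{gain: } v(t) \mapsto v(t) + g \qquad\quad \text{enhancement: } v(t) \mapsto (1+e)\,v(t)$$

Note that an enhancement multiplies the whole integral by $(1+e)$, so its value scales with both the quality and the duration of the future; this underpins the comparisons in the next section.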
Continued terminology in this post
We think all of the concepts above—Existential Hope, Utopias, and Trajectory Change—are relevant to consider when working to improve the long-term future. And the main thing we want to do with this post is to explore the value of upside risk approaches as complementary to reducing existential risk approaches. Thus, in the following, we will not limit ourselves to one of the concepts but rather refer to “Very Positive Futures” (VPF) in general, and only when relevant, highlight if one of the concepts seems especially useful.
We will not give a precise definition of VPF but settle with saying that working on VPF means trying to identify and/or bring about something that would be very good for the expected value of the long-term future. Examples of what working on VPF could look like are changing society’s values to care more about non-human sentient beings living now and in the future, creating institutions that better represent future generations, or finding techniques to increase the likelihood of potential sentient artificial intelligence having better lives, to mention a few.
Importantly, with VPF and in the following, we also want to focus on framings and narratives of work to shape the long-term future; that is, not only what to work on but also how to talk about the work. We consider what benefits and disadvantages more positive and hopeful framings might have. We want to be very clear that a VPF framing does not imply that risks should not be mentioned or considered. It means that considerations of large downside risks (catastrophic and existential risks) should be complemented with considerations of large upside risks (very positive futures).
2. Reasons why we might want to focus more on VPF
Generate ideas for new causes and interventions
If we can ensure the survival of humanity broadly construed, we believe there could still be vast differences in the expected value of the future, including between a “default” future and a very good future. While we think that it will be very hard to increase the chance of VPF, there does not seem to have been much thought put into finding such opportunities. Giving this more consideration might help generate ideas for new causes and effective interventions. Arguably, improving values, as advocated by Will MacAskill in What We Owe the Future, falls into this bucket.
Additionally, a better mapping of VPF and a better understanding of what to aim for and how to get there might aid in avoiding paths that lead to unrecoverable dystopias or collapses (i.e., existential catastrophes other than extinction).
Improve how to prioritize the future in relation to the present
A crucial consideration when deciding on how to prioritize between near-term and long-term causes and interventions (including causes and interventions to lower existential risk) is the potential value of the future. To understand the potential value of the future, it is important to understand VPF, especially if we think there is a high variance in the value of potential futures. As far as we have been able to tell, few have given these questions serious thought, indicating that there is still low-hanging fruit to pick.
One important sub-question here is whether the future will be good by “default,” conditional on us getting there. For more on this, we want to recommend What We Owe The Future chapter 9 by William MacAskill, this post by Jan Brauner and Friederike Grosse-Holz, and this post by Paul Christiano.
Improve how to prioritize between long-term focused causes and interventions
A better understanding of how good and likely various long-term futures are, what they would look like, and what would make them more or less likely could aid in prioritizing different causes and interventions aimed at shaping the far future.
First, we want to point to Toby Ord’s recent paper and talk, comparing different long-term trajectories. It is much better and more exhaustive than anything we can hope to create. However, we want to highlight two observations in Ord’s paper that we think are good examples of how alternative approaches to VPF can help us prioritize between and within longtermist causes.
The value of work aimed at improving the long-term future ideally scales with both the instantaneous value of the long-term future and its duration. Using Ord’s terminology, Advancements and Speed-ups seem less promising than Enhancements (and existential risk reduction).
The value of work that brings about progress sooner depends critically on whether it also brings about (or risks bringing about) extinction sooner. Using Ord’s terminology, Advancements and Speed-ups are negative if they also bring forward extinction.
Further, we make a few observations of our own. If a cause both reduces existential risk and increases existential hope, it seems like it should, all else being equal, be prioritized higher than one that only reduces existential risk by an equal amount. More concretely, this might, on the margin, push for something like AI safety being more valuable than nuclear security, as AI seems to have larger potential to bring about very positive things (if safe) compared to nuclear technology. The same holds within causes: if a particular intervention both reduces existential risk and increases existential hope, it should, all else being equal, be prioritized higher than one that only reduces existential risk. More concretely, this might push for something like focusing on neglected green tech rather than campaigns nudging people to use less fossil fuel.
Also, prioritizing existential risk reduction leans quite heavily on the assumption that the expected value of human survival is positive. If the expected value (by default) is lower than previously thought, this suggests reallocating some longtermist resources from extinction risk reduction towards trying to improve the quality of the future. This could be done both by decreasing downside quality risk, including s-risks, and by increasing upside quality risk, including the chance of VPF. Some potential approaches to enhancing the quality of the long-term future include expanding the moral circle, cooperative game theory, the well-being of digital minds, and global priorities research. It seems worth noting that risk aversion increases the value of improving the quality of the future relative to ensuring survival, conditional on thinking there is some likelihood of a bad future. Another potential argument for valuing qualitative change over survival is that, plausibly, the value lost to the gap between the expected and the highest possible quality of the future is greater than the value lost to the gap between the expected and the highest possible duration of survival. (Obviously, this argument could be turned on its head.)
Attracting more people to work on shaping the long-term future
Increasing the salience and emotional connection to the future
Something we find striking is that many early thinkers about existential risks were transhumanists, such as Nick Bostrom, Anders Sandberg, and David Pearce. One might wonder why. Was it just a fear of AI bringing about the end of humanity, or concerns about the misuse of biotechnology? We do not think so; rather, these thinkers seem to have been driven by an understanding, and a sense, that when it comes to the future, there is an enormous amount at stake. We think this is an interesting indication that viewing the future through a VPF lens, where the stakes are understood to be very high, could motivate a deeper concern for the long-term future, including existential risks. In a recent 80,000 Hours podcast episode, Anders Sandberg seems to allude to a similar conclusion:
“I think one reason I started on this [writing his book on Grand Futures] was I realised we need to write about hope. A lot of my research is about existential risk and global catastrophes and other dreadful things. Quite often journalists ask me, ‘But how do you sleep at night?’ And I usually explain actually quite well, because I’m thinking that I’m doing my part to reduce some of this risk. But the deeper answer is that I’m really optimistic. And you have to be optimistic about the future to want to save it. If the future actually could be very grand, we have a very good reason to save it.”
On this point, we think that most people seem to vastly underestimate how good or how large the future could be. This is perhaps a defense mechanism, as it may be easier to be complacent about a future that is only marginally better than the present. More on this below.
Bridging the gap between different communities
By offering a complementary framing, VPF could potentially expand engagement in longtermist work, attracting groups that haven’t traditionally participated because of their less risk-focused approach to development and the future. Some actors fitting this description (such as Marc Andreessen and Effective Accelerationists) seem to have reacted against the current EA and longtermist discourse and community because they think too little attention is given to the benefits of development.
We think it may be especially important to consider VPF for people who are excited about (unchecked) technological development and/or are working on two-sided technologies with limited concern for downside risks or the long-term future. Our intuition is that such actors would be more attracted to a VPF framing, as this perspective, while emphasizing longer-term effects and the importance of safety and reducing existential risk, also talks about the upside, which fits more naturally with a development mindset. Further, if someone already is, or is likely to be, working on two-sided technologies or short-term capabilities without much consideration for the long-term future, the counterfactual value of getting them to update seems especially high.
There is a very important counter-consideration here. A VPF approach might lead to over-excitement and work for unchecked technological development. Therefore, it’s crucial that VPF places a strong emphasis on safety-centric strategies like differential technological development.
There is currently a bias toward negative futures
Our perception is that there is little discussion or description of potential VPFs, while there are many dystopian depictions of the future. Further, the few existing attempts to describe VPF often fall short, tending to be:
Homogeneous: it’s hard to describe a world of people living very differently from each other. To be concrete, utopias tend to emphasize specific lifestyles, daily activities, etc., which ends up sounding totalitarian.
Alien: Anything too different from the status quo is going to sound uncomfortable to many people.
We think the lack of compelling visions for the long-term future is problematic for a couple of different reasons (this is obviously more or less what this entire post is about, but a few things seem especially relevant to note here):
Faced with a flow of dystopian stories in our popular culture, it seems easy to feel pessimistic about the future, inhibiting action.
Building on the above, we think many underestimate the potential of scientific and technological advances. One perspective posits technological and scientific advancements as neutral or even harmful, cautioning that while some problems may be solved, others of equal or greater weight might emerge, or that the benefits plateau beyond a certain threshold of advancement. This perspective seems especially likely to arise where pessimistic narratives overshadow optimistic ones. We think it is more indicative of our lack of imagination than of any intrinsic limitations of the future itself.
Short-term aims and visions are not enough. While we think addressing immediate societal concerns is paramount, an exclusive focus on them seems myopic. We should also look to the horizon, considering humanity’s longer-term objectives (including preventing potential large-scale catastrophes, not only for those living today but also considering the broader scope of humanity’s potential future). If the notions of positive futures seem ambiguous or ill-defined, it could be tempting to shift focus to more tangible, short-term goals.
Radical changes and (what we might think of as) the long-term future could be here surprisingly quickly, so we want to know what to aim for. See more below.
As an end note here, we believe that, while conceptualizing VPF seems complicated, it is worth further effort; we haven’t tried enough to conclude it is intractable.
The future could be here sooner than we think
This section, arguing that the future could be here sooner than we think, mainly summarizes some posts in Holden Karnofsky’s Most Important Century series.
Standard economic theory suggests that any technology capable of fully automating innovation would lead to productivity skyrocketing to infinity this century. This is due to the creation of a powerful feedback loop: more resources lead to more innovation, which leads to more resources, and so on. To some extent, we have already seen this—economic history followed a loop of more resources, leading to more people, leading to more ideas, which led to more resources, and so on. However, a couple of hundred years ago, during the “demographic transition,” the “more resources leading to more people” dynamic of that loop stopped. Instead of leading to more people, more resources led to richer people; at least, this has been the case for more developed nations. If some other technology were to restore the “more resources leading to more ideas” dynamic, the feedback loop could come back. One such technology could be the right kind of AI, where more resources lead to more AIs, which lead to more ideas, which lead to more resources, and so on. This means that a radically different (long-run) future could be upon us shortly after relevant AI is developed (if it ever is). And as it seems relevant AI could be developed in the decades to come, our perspectives on which future to aim for may become increasingly important.
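As a toy illustration of this feedback loop (our own sketch, not a model from Karnofsky’s series), consider idea production that builds on the existing stock of ideas. With fixed research effort, the stock grows exponentially; if effort itself scales with the stock (as when AI systems do the R&D), growth becomes hyperbolic, reaching any fixed target dramatically faster:

```python
def steps_to_reach(target: float, feedback: bool, rate: float = 0.02) -> int:
    """Count steps until the idea stock reaches `target`."""
    ideas, steps = 1.0, 0
    while ideas < target:
        # Fixed effort -> exponential growth; effort proportional to the
        # idea stock -> growth ~ ideas^2, i.e. finite-time blowup.
        effort = rate * ideas if feedback else rate
        ideas += effort * ideas  # new ideas build on existing ideas
        steps += 1
    return steps

print("fixed research effort:   ", steps_to_reach(1e6, feedback=False), "steps")
print("effort scales with ideas:", steps_to_reach(1e6, feedback=True), "steps")
```

The parameters are arbitrary; the point is only the qualitative shift in regime once the “more resources lead to more ideas” link is restored.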
We think this raises the potential importance of conducting more research on VPF and desired futures now, particularly as we put some probability on scenarios where our current choices will shape future possibilities in non-trivial ways. By working to increase the chances of VPF, we enhance our likelihood of building AI systems and a society that not only mitigate downside risks but also bring about upside risks.
Information value
As indicated throughout this post, we believe little work has been done on VPF, and we think the potential value at hand is high but very uncertain. Thus, it seems there is substantial information value to be gained by further work and research on VPF. (Though we think it is far from obvious how to gain more information, making it less tractable.)
3. Reasons why we might not want to focus more on VPF
We want to stress that we feel uncertain about all our suggested reasons for concentrating more on VPF. Thus, relevant counterarguments could simply be negations of these suggestions. Below, we list the counterarguments that seem most relevant to us.
We should prioritize work on avoiding existential risk
As we have tried to highlight throughout the post, we think work on existential risk is extremely important. One of the authors, Vilhelm Skoglund, thinks this is where most people focusing on shaping the long-term future should direct their resources (and that there are compelling reasons for focusing on existential risk reduction from a near-to-medium-term perspective as well), while being uncertain how marginal resources should be split between existential risk reduction and other approaches to shaping the long-term future. But you might think that it is clear that existential risk deserves more, or all, resources. There could be several reasons for this:
We can work on VPF when we have reduced more urgent existential risk to a sufficient level, and there will not be any meaningful lock-in limiting the potential upside following from this delay.
There is a “low” upper bound on how good the future could be, future civilizations can be expected to reach that bound by default, or moral convergence is very likely, conditional on survival.
It is not tractable to find and/or take action that has big, contingent, and persistent effects on our long-term future other than reducing existential risk.
Building on the above, feedback loops for increasing the likelihood of VPF are (even) worse than feedback loops for decreasing existential risk.
One might prioritize existential risk based on expected value in the near-to-medium term, without being (primarily) motivated by potential value in the long-term future.
Work on VPF could potentially (unintentionally) increase existential risks. For instance, by accelerating AI capabilities or atomically precise manufacturing without proper safety considerations.
It is (much) more important to avert expected suffering than to increase expected well-being. (This would likely push you to focus on decreasing s-risks rather than existential risks.)
VPF will not effectively motivate people to work on shaping the long-term future
We have suggested some reasons why VPF might be an effective complementary framing to attract and motivate people to work on shaping the long-term future, including existential risk. Reasons you might disagree:
VPF approaches and messaging might attract individuals who are development-biased and underestimate the complexity of long-term challenges, leading, at best, to resources being redirected to ineffective attempts to shape the long-term future and, at worst, to naive accelerationism that causes harm.
Focus on VPF might be seen as detached from present concerns (more so than existential risk), leading to backlash from those focusing on the near term. It might exacerbate the perception that most people concerned about the long-term future, including existential risk, don’t care much about near-term issues and/or are mainly motivated by small chances of astronomically good outcomes.
Focus on VPF might diminish the perceived urgency to take action today to bring about a positive long-term future and mitigate existential risks.
Several existential risks could affect those living today, making society more likely to engage with them for (partly) self-serving and near-term reasons. This might be especially true for institutions with built-in near-term focus, such as (most) political institutions.
Existential risk is simpler and more concrete than VPF. When discussing existential risks, we agree on one fundamental point: we desire survival and continued potential for good things. When discussing VPF, by contrast, we disagree on what those good things are. This might make existential risk a more compelling message for people to “rally” behind. Relatedly, as noted above, it is hard to craft compelling visions of the future, and this might make such attempts ineffective.
VPF may seem overly optimistic or “pollyannaish,” striking people as promising an unrealistic, guaranteed positive future. Further, lack of progress or setbacks could lead to disappointment and loss of motivation to engage in work to shape the long-term future.
We should prioritize focusing more on individual causes rather than on VPF.
Work on VPF will have negative consequences
Utopian thinking has a bad track record, and there might be a risk of similar developments with work on VPF. As noted by Carlsmith:
“Utopian” thinking has an aura, at best, of a kind of thin-ness and naïveté — an aspiration to encompass all of the world’s mess and contradiction in a top-down, rational plan, to “solve things once and for all,” to “do it right this time.” Here one thinks of poorly but “rationally” planned cities, failed communes, and the like.
And at worst, of course, Utopianism ends in abject horror. There is a certain type of fervor that comes from thinking, not just that paradise is possible, but that you and your movement are actually, finally, building it. In the grip of such enthusiasm, humans have historically been very willing to do things like “purge” or “purify”; to crush and suppress any opposition; to break eggs to make omelets, and so on.
4. Ending Remarks
This post has explored the concept of very positive long-term futures and its potential influence and value to the EA community and efforts to shape the long-term future. We remain uncertain about whether to invest more resources in VPF approaches and framings in the long term, but we think it seems plausible enough that we should spend resources to get higher clarity on this and experiment more with VPF approaches and framings. As mentioned, perhaps it could help us gain a better understanding of what futures to aim for, help prioritize between causes and interventions, and motivate more people to work on shaping the long-term future.
A few of our key uncertainties that have not been highlighted in the above counterarguments:
How many people are pursuing relevant work related to VPF, compared to relevant work on existential risk?
Will (most of the) potential upside of the long-term future be harnessed by default, conditional on avoiding existential risks?
Apart from research similar to that of Ord, what are the most promising alternatives to existential risk reduction for shaping the long-term future? Are there any tractable causes or interventions?
We’d love to receive feedback on the post and on the potential benefits and disadvantages of different approaches and framings, including those not covered in this post. Here are a few prompts for further discussion:
What are your views on the potential pros and cons of more focus on VPF? With which points do you agree or disagree? What are other arguments?
Which term resonates most with you: Existential Hope, Positive Trajectory Change, Utopia, or perhaps a similar term not described in the post?
Are there any significant pieces of writing or research on this topic that we have overlooked?
What are the best accounts of VPF you’re familiar with?
Do you think a more rigorous conceptual mapping, or the creation of frameworks and standardized terminology for VPF, would be useful?
Existential Hope and Existential Risk: Exploring the value of optimistic approaches to shaping the long-term future
Epistemic Status
Personal takes aimed to raise a discussion and get input. This is a side product of another strategy project, rather than a systematic and rigorous analysis of longtermist movement building and different concepts related to (very) positive long-term futures. We have not been able to confirm with original authors that we represent their ideas correctly, and while we have done our best and tried to refrain from speculating about their views, we might be mistaken. We apologize in advance and will do our best to update the post as needed.
One of the authors (@elteerkers) has a clear bias as she works at Foresight Institute and runs their existential hope program (both of which will be referred to in the post). The other author (@Vilhelm Skoglund) is also somewhat biased, as he is elteerkers’ partner and has been interacting a lot with Foresight and their existential hope program. This is written in a personal capacity and does not represent the views of our employers.
We think the most significant downside risks of this post are that:
We come across as making a case for redirecting resources away from work to decrease existential risk toward work on increased long-term flourishing, thus increasing tension between people/groups on different sides of this spectrum. (Rather, we wish to raise a discussion to help ourselves and others better build the community of people interested in working to shape the long-term future.)
People overupdate on these takes or do not critically consider what they might mean for their respective contexts.
Summary
This post explores if there could be value in considering alternative approaches to motivate work aiming to shape the long-term future. More concretely, we consider strategies aiming to increase upside risk to complement what we see as the current primary approach—decreasing large downside risk (primarily existential risk). We begin by describing a few terms regarding pursuing upside risk, such as “existential hope,” “utopia,” and “positive trajectory change,” and consider how these terms are being used today and what work (if any) is being done under these banners in the EA and longtermist community.
We find all these terms relevant to consider when working to shape the long-term future. As we mainly want to consider the potential effects of approaches and framings pursuing upside risk in general, we use the term “very positive futures” (VPF) for the remainder of the post, where we discuss the potential arguments for and against putting more emphasis on pursuing upside risk.
Potential arguments for focusing more on VPF approaches and framings include:
A crucial consideration when deciding on how to prioritize between near-term and long-term causes and interventions (including reducing existential risk) is the potential value of the future. And to understand the potential value of the future, it is important to understand VPF.
Understanding what positive long-term futures could look like and what would make them more or less likely could aid in prioritizing between different causes and interventions aimed at shaping the far future.
Putting more emphasis on VPF in communication and crafting compelling visions of the future might;
Attract more people to work on improving the long-term future (including reducing existential risk) by creating a better understanding of how good the future could be, as well as increasing people’s emotional attachment to the future.
Help in bridging the gap between communities working on reducing existential risk from emerging technologies and techno-optimists who are skeptical of attempts to govern or slow down work on technological capabilities. This might be especially important if such techno-optimists are working on two-sided technologies and become more interested in differential technological development under a VPF approach.
Building on the above, currently, in culture, there seems to be a bias toward dystopian depictions of the future. Apart from making people pessimistic about the future and inhibiting action, this might make people underestimate the potential of scientific and technological advances and, thus, generally skeptical of societal investments in these fields.
Considering the potential of AI to accelerate innovation rapidly, a radically different future could be here soon. If this is the case, we need research and work on VPF to guide this acceleration towards desirable outcomes.
As far as we can tell, little work has been done on VPF, and we think the potential value is high while very uncertain. Thus, this area could have low-hanging fruit and likely substantial information value to be gained.
Potential arguments against focusing more on VPF approaches and framings include:
We should prioritize work on reducing existential risk. The authors of this post think work on existential risk is extremely important, and we are unsure if the current relative proportions of work to increase upside risk and decrease downside risk are sound. But you might think it is clear that the current relative proportions are sound or that there should be more focus on existential risk. A few reasons for this position could be that you think:
We can work on VPF when we have reduced more urgent existential risk to a sufficient level.
There is a “low” upper bound on how good the future could be that future civilizations can be expected to achieve.
It is not tractable to find and/or take actions with big, contingent, and persistent effects on the long-term future other than reducing existential risk.
Work on VPF could (unintentionally) increase existential risks, e.g., by accelerating AI development without proper safety considerations.
Putting more emphasis on VPF might not be an effective way to attract more people to work on improving the long-term future, or it might attract people who are less sympathetic to the complexities of long-term challenges, leading to ineffective or even harmful work.
Utopian pursuits have a bad track record. Similarly, work on VPF risks fostering naive overconfidence, leading to ill-conceived solutions or extreme measures that may cause more harm than good.
We end the post asking for input and discussion on the topic. A few key uncertainties we would love higher clarity on are the current proportions of work dedicated to VPF versus existential risk, the “default trajectory” of the future, conditional on survival, and ideas for promising causes or pathways for VPF.
Introduction
Most people in the EA community working to shape the long-term future focus on avoiding existential risk, for example, by preventing unaligned AI or engineered pandemics. There are good reasons for this; existential catastrophes are possible, and many think we are now experiencing a heightened level of risk. If they did occur, we would be unable to grasp our civilization’s long-term potential, and while it is difficult, we can act to avoid existential risk.
However, avoiding existential risk isn’t the only way we can shape the long-term future. Rather than avoiding large downside risks, we could act to attain large upside risks. For example, if we think humanity’s values might get locked in, we can take strategic action to improve our values today, which may lead to humanity being guided by better values for millennia to come. Or we might be able to speed up progress persistently—giving humanity access to life-altering technology or social development sooner than what would otherwise have happened.
We think there could be value in considering such (complementary rather than rivalry) approaches for the community of people working to shape the long-term future. We mean this partly in terms of what causes and interventions to focus on and partly in terms of how we talk about and frame our work. At least, we think this area would benefit from more conceptual clarity and deserves exploring. That is what we hope to do in this post. More precisely, we aim to:
Explore a few of ideas and terms related to very positive long-term futures (probably skip this if you have thought a lot about alternative approaches to shape the long-term future)
Present potential benefits for the EA and wider longtermist community of focusing more on very positive long-term futures
Present potential drawbacks of focusing more on very positive long-term futures
List our main uncertainties and raise questions for further discussion
1. Ideas and terms regarding positive long-term futures
Introduction
Before exploring the arguments for and against focusing more on pursuing large upside risk, we want to describe the frameworks and terms we think seem most relevant in this domain—Existential Hope, Utopia(s), and Positive Trajectory Changes. Note that this is not an exhaustive list, and our understanding of these concepts might differ from how others use them. Please let us know if you think something important is lacking or disagree with what we say!
Existential Hope (and Eucatastrophes)
Origin
The terms existential hope (and eucatastrophe) were, in the meaning used here, introduced by Toby Ord and Owen Cotton-Barratt in the paper “Existential Risk and Existential Hope: Definitions.” They start the paper by exploring different definitions related to existential risk and suggest that a definition via expectations might be better than the traditional Bostrom definition as it handles non-binary events better. (Note that it seems Ord has gone back to prefer Bostrom’s traditional definition. At least it is what he uses in The Precipice.)
More precisely, they suggest the following definitions:
With that background, we can now move on to existential hope and eucatastrophes because it is quite natural to consider these parallel concepts as “the other side of the coin” when defining existential risk and existential catastrophe this way, via expectations.
Ord’s and Cotton-Barratt’s definitions:
Many uncertainties arise with this definition, especially concerning what they see as the threshold for “much more expected value.” Our understanding is that they have a (very) high bar in mind. The origin of life is mentioned as a specific eucatastrophe. Also, they mention the rise of multicellular life and intelligence as eucatastrophes, to the extent that they were not inevitable, and suggest successfully passing any ‘great filter’ as a general type of eucatastrophe. A further indication of their high bar is that in an EA Forum post, Owen Cotton-Barratt has said that he does not think successful whole-brain emulation or full automation of the labor required to maintain a comfortable style of living for everyone would amount to eucatastrophes, as it is unclear what the effect on the long-term expected value of these events would be.
Use today
As far as we can tell, the organization that uses the terms existential hope and eucatastrophe most today is Foresight Institute. Note that the co-author of this post, elteerkers, works at Foresight Institute.
Their use of the term existential hope is much broader than what Ord and Cotton-Barratt initially seem to have intended. On Foresight’s existential hope website, it is explained in the following way: “Existential hope is a way of looking at the future with a lens that acknowledges the potential for positive change in the world. By focusing on what is actually possible, we can start to map out the main developments and challenges that need to be navigated to reach our desired outcome.”
Foresight also uses the term “eucatastrophe” in a much broader way than Ord and Cotton-Barratt, simply stating that “A eucatastrophe would be the opposite of a catastrophe, i.e., an event after which the expected value of the world would be much higher.” There is no definition of how much higher the value would have to be to count as a eucatastrophe. Examples of what is referred to as potential eucatastrophes on their website are inventions like cryopreservation or safe fusion energy.
Foresight often communicates what they believe existential hope is in relation to existential risk. When asked why they work with existential hope, Foresight CEO Allison Duettmann has said:
As alluded to in this quote, Foresight focuses on existential hope not only when considering what (causes) to work on to improve the long-term future, but also in how to talk about this work. While acknowledging risks and the need to be careful, Foresight aims to imbue people with hope and excitement about the future.
Utopias and describing very positive futures
Origin
The history of the term “utopia” is too long and broad to cover in this post. Suffice it to say that it is a highly established term, coined by Sir Thomas More in 1516 in his fictional book of the same name. The word is derived from the Greek “ou,” meaning “not,” and “topos,” meaning “place,” together suggesting that a utopia is a “no-place,” or an idealized, nonexistent society.
Over time, the term “utopia” has come to represent any visionary or idealized state of society and has been used in a variety of contexts, both positive and negative. Some have also sought to bring utopian ideals one step further and turn them into reality through planned communities or political reforms, sometimes leading to significant suffering. More on this below.
We recommend the Wikipedia article for a more comprehensive account of the history of the word Utopia and how it has been used more generally.
Use today
Today, utopia is used in many contexts. Below, we will focus on use within EA and the community of people working to shape the long-term future.
Several people in EA and the community working to shape the long-term future have written about utopia. While not all offer explicit definitions, and they use the term somewhat differently, we think it is fair to say that most use it to refer to “a profoundly good future” (borrowing words from Bostrom).
A common theme in these writings is that humanity, along with our descendants, has the potential to construct a profoundly good future, given sufficient patience and wisdom. Another theme centers on attempts to describe concrete utopias, and on why doing so is both important and challenging.
To avoid the difficulties of describing appealing concrete utopias and to aid in steering toward positive futures, authors have described different types of utopias or paths toward them, such as:
Mild to Radical Utopias—Introduced by Holden Karnofsky, who argues that utopias can be thought of as existing on a spectrum. Mild utopias are those that make incremental improvements to the current world and do not fundamentally challenge the status quo. These utopias might involve minor changes to laws and policies or the implementation of new technologies that make life better in some way. Radical utopias are those that involve more significant changes to the way society is organized and operated.
Concrete vs. Sublime Utopias—Introduced by Joseph Carlsmith, who worries that most imagined utopias today are too small. Most depictions of utopia fall into the category of concrete utopias, while sublime utopias may be too hard for us even to imagine, given how we function and are limited as human beings.
Protopia—Introduced by Kevin Kelly to describe a state of continuous improvement and incremental progress in the future rather than a fixed and static utopian outcome.
Weirdtopia—Introduced by Eliezer Yudkowsky to capture the fact that the future will be weird to us; it may even be indistinguishable from a dystopia by today’s standards. That is because values drift a lot over time. And given that technological progress is speeding up and leading to new social realities, values may only drift faster and farther.
Paretotopia—Introduced by Eric Drexler and Mark S. Miller, with an analogy to phototropism in plants: just as plants naturally grow toward light, Paretotopia represents a state in which society gravitates toward Pareto-preferred outcomes. They argue that we already live in a form of Paretotopia, as voluntary interactions gradually move civilization in Pareto-preferred directions; e.g., we have become less violent over time, and various aspects of life, such as health and education, have improved.
Worldbuilding
A concept related to imagining positive futures that seems worth noting is worldbuilding. We understand worldbuilding to be a mix of art and science (with more or less focus on one of them depending on context) that aims to construct a coherent, and preferably detailed, fictitious depiction of a world. When exploring possible futures for our own world, worldbuilding can serve as a tool to gain a deeper understanding of which types of worlds may be more or less desirable and potentially how to navigate towards a more desirable reality.
Worldbuilding has been employed in fields like emergency and defense planning. In a paper from RAND, it is argued that worldbuilding can help create a system for identifying concrete actions that “lead to both new thinking and coordinated effort [...] in a direction that aims at a desirable future, not merely a default one” and “help(s) inject more focus on human agency into our consideration of technology futures by exploring purposely different scenarios that stretch the imagination to think about what we would like to have happen within a robust framework for how it might happen.”
For interesting examples of worldbuilding and imagined worlds, see Future of Life Institute’s Worldbuilding contest and listen to their podcast, where top entries are explored.
(Positive) Trajectory Change
Origin
In his Ph.D. thesis, Nick Beckstead introduced the concept of the world’s development trajectory (trajectory for short), which is a rough summary of the way the future will unfold over time. This summary includes various facts about the world, e.g., “how rich people are, what technologies are available, how happy people are, how developed our science and culture is (...), and how well things are going all-things-considered at different points of time.” Notably, the concept of a trajectory change is related to the concept of path dependence (whenever some aspect of the world could easily have been arranged in way X but is arranged in way Y due to something that happened in the past) and historical contingency (chance-influenced events with substantial long-term effects).
A positive trajectory change occurs when our actions improve the long-term path along which our descendants will walk.
Use today
Various actors use the concept of trajectory changes today. For example, Will MacAskill explores it in What We Owe the Future, distinguishing between attempts to ensure civilizational survival (avoiding extinction) and attempts to change our trajectory (improving the quality of future people’s lives). He also introduces a framework for assessing the expected value of attempts to influence the future, built on three main criteria: significance, persistence, and contingency (we sketch one rough way to combine them after the example below).
Significance considers the impact of an event on the world’s overall value. It gauges the difference an event makes, whether positive or negative.
Persistence evaluates the duration of that impact. Essentially, how long will the consequences of a particular event last?
Contingency considers whether the event is likely to have occurred without any intervention. It questions the inevitability of an outcome.
For instance, let’s examine species loss. One should assess the value that a specific species brings to the world: this is its significance. Next, consider the permanence of its loss: once a species is gone, can it ever be recovered? This is its persistence. Finally, one should evaluate contingency: even if efforts were made to prevent a species’ extinction, could external forces still lead to its demise shortly after?
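One rough way to combine the three criteria into a single expected-value estimate (our own sketch, not a formula from the book): for a state of affairs $S$ that an intervention tries to bring about,

$$\mathbb{E}[\text{value}] \;\approx\; \underbrace{p_S}_{\text{contingency}} \times \underbrace{\bar{v}_S}_{\text{significance}} \times \underbrace{\tau_S}_{\text{persistence}},$$

where $p_S$ is the probability that $S$ would not have come about anyway, $\bar{v}_S$ is the average value $S$ adds per unit of time, and $\tau_S$ is how long $S$ lasts. The species example illustrates that any one factor being near zero is enough to make the whole product small.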
Toby Ord recently published a great paper describing different potential positive trajectory changes. To learn more about these trajectories, we recommend reading the paper or watching this clip, as the graphics are helpful for building an understanding. Below is a very rough summary of the trajectories Ord considers (other than avoiding extinction), followed by a rough formalization:
Advancements – If we think the future is likely to be better than the present, we could try to improve it by shifting the trajectory of humanity’s instantaneous value earlier by some amount of time. This may correspond roughly to advancing all forms of progress by that same amount of time.
Speed-ups – Instead of only advancing progress by some amount, we could maybe do something to permanently speed it up—to achieve in 100 years what would otherwise have taken us 101, achieve in 1,000 years what would otherwise have taken 1,010, and so on. As this would have a proportionally more significant effect further into the future, it wouldn’t be shifting the future trajectory but rather compressing it.
Gains – What if, instead of adjusting the timings of the future trajectory, we could directly adjust the instantaneous value itself? A gain is a direct increase of instantaneous value by a constant absolute amount.
Enhancements – It may also be possible to permanently improve humanity’s instantaneous value by a given proportion. E.g., making every moment across humanity’s future 1% more valuable. Ord calls this an enhancement.
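To make the four types easier to compare (our own simplified notation, following the setup in Ord’s paper): let $v(t)$ be humanity’s instantaneous value at time $t$ and $T$ the time of extinction, so that the total value of the future is $V = \int_0^T v(t)\,dt$. Then, roughly:

$$\begin{aligned}
\text{advancement by } a &: \; v(t) \mapsto v(t + a)\\
\text{speed-up by } s &: \; v(t) \mapsto v\big((1+s)\,t\big)\\
\text{gain of } g &: \; v(t) \mapsto v(t) + g\\
\text{enhancement by } e &: \; v(t) \mapsto (1+e)\,v(t)
\end{aligned}$$

An enhancement multiplies the total value $V$ by $(1+e)$ however long the future lasts, while an advancement mostly moves forward value that would have arrived anyway; this is one way to see why enhancements scale better, a point we return to below.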
Continued terminology in this post
We think all of the concepts above—Existential Hope, Utopias, and Trajectory Change—are relevant to consider when working to improve the long-term future. The main thing we want to do with this post is to explore the value of upside-risk approaches as complementary to approaches that reduce existential risk. Thus, in the following, we will not limit ourselves to one of the concepts but refer to “Very Positive Futures” (VPF) in general, highlighting individual concepts only where one seems especially useful.
We will not give a precise definition of VPF but settle for saying that working on VPF means trying to identify and/or bring about something that would be very good for the expected value of the long-term future. Examples of what working on VPF could look like include changing society’s values to care more about non-human sentient beings living now and in the future, creating institutions that better represent future generations, or finding techniques to increase the likelihood that potential sentient artificial intelligence will have good lives, to mention a few.
Importantly, with VPF and in the following, we also want to focus on the framings and narratives of work to shape the long-term future; that is, not only what to work on but also how to talk about the work. We consider what benefits and disadvantages more positive and hopeful framings might have. To be very clear: a VPF framing does not imply that risks should not be mentioned or considered. It means that considerations of large downside risks (catastrophic and existential risks) should be complemented with considerations of large upside risks (very positive futures).
2. Reasons why we might want to focus more on VPF
Generate ideas for new causes and interventions
If we can ensure the survival of humanity broadly construed, we believe there could still be vast differences in the expected value of the future, including between a “default” future and a very good future. While we think that it will be very hard to increase the chance of VPF, there does not seem to have been much thought put into finding such opportunities. Giving this more consideration might help generate ideas for new causes and effective interventions. Arguably, improving values, as advocated by Will MacAskill in What We Owe the Future, falls into this bucket.
Additionally, a better mapping of VPF and a better understanding of what to aim for and how to get there might aid in avoiding paths that lead to unrecoverable dystopias or collapses (i.e., existential catastrophes other than extinction).
Improve how to prioritize the future in relation to the present
A crucial consideration when deciding on how to prioritize between near-term and long-term causes and interventions (including causes and interventions to lower existential risk) is the potential value of the future. To understand the potential value of the future, it is important to understand VPF, especially if we think there is a high variance in the value of potential futures. As far as we have been able to tell, few have given these questions serious thought, indicating that there is still low-hanging fruit to pick.
One important sub-question here is whether the future will be good by “default,” conditional on us getting there. For more on this, we want to recommend What We Owe The Future chapter 9 by William MacAskill, this post by Jan Brauner and Friederike Grosse-Holz, and this post by Paul Christiano.
Improve how to prioritize between long-term focused causes and interventions
A better understanding of how good and likely various long-term futures are, what they would look like, and what would make them more or less likely could aid in prioritizing different causes and interventions aimed at shaping the far future.
First, we want to point to Toby Ord’s recent paper and talk, comparing different long-term trajectories. It is much better and more exhaustive than anything we can hope to create. However, we want to highlight two observations in Ord’s paper that we think are good examples of how alternative approaches to VPF can help us prioritize between and within longtermist causes.
The value of work aimed at improving the long-term future ideally scales with both the instantaneous value of the long-term future and its duration. Using Ord’s terminology, Advancements and Speed-ups therefore seem less promising than Enhancements (and existential risk reduction).
The value of work that brings about progress sooner depends critically on whether it also brings about (or risks bringing about) extinction sooner. Using Ord’s terminology, Advancements and Speed-ups are negative if they also bring forward extinction.
Further, we make a few observations of our own. If a cause both reduces existential risk and increases existential hope, then, all else being equal, it should be prioritized over a cause that only reduces existential risk by an equal amount. More concretely, this might, on the margin, push for something like AI safety being more valuable than nuclear security, as AI seems to have a larger potential to bring about very positive things (if safe) than nuclear technology does. The same holds for interventions within a cause: one that both reduces existential risk and increases existential hope should, all else being equal, be prioritized over one that only reduces existential risk. More concretely, this might push for something like focusing on neglected green tech rather than on campaigns to nudge people to use less fossil fuel.
Also, prioritizing existential risk reduction leans quite heavily on the assumption that the expected value of human survival is positive. If the expected value (by default) is lower than previously thought, this suggests reallocating some longtermist resources from extinction risk reduction toward improving the quality of the future. This could be done both by decreasing downside quality risk, including s-risks, and by increasing upside quality risk, including the chance of VPF. Some potential approaches to enhancing the quality of the long-term future include expanding the moral circle, cooperative game theory, the well-being of digital minds, and global priorities research. It seems worth noting that risk aversion increases the value of improving the quality of the future relative to ensuring survival, conditional on thinking there is some likelihood of a bad future. Another potential argument for valuing qualitative change over survival is that, plausibly, the value lost to the gap between the expected and the highest possible quality of the future is greater than the value lost to the gap between the expected and the longest possible duration of survival. (Obviously, this argument could be turned on its head.)
Attracting more people to work on shaping the long-term future
Increasing the salience and emotional connection to the future
Something we find striking is that many early thinkers about existential risks were transhumanists, such as Nick Bostrom, Anders Sandberg, and David Pearce. One might wonder why. Was it just a fear of AI bringing about the end of humanity, or concerns about the misuse of biotechnology? We do not think so; rather, these thinkers seem to have been driven by an understanding, and a sense, that when it comes to the future, there is so much at stake. We think this is an interesting indication that viewing the future through a VPF lens, where the stakes are understood to be very high, could motivate a deeper concern for the long-term future, including existential risks. In a recent 80,000 Hours podcast episode, Anders Sandberg seems to allude to a similar conclusion:
On this point, we think that most people seem to vastly underestimate how good or how large the future could be. This is perhaps a defense mechanism, as it may be easier to be complacent about a future that is only marginally better than the present. More on this below.
Bridging the gap between different communities
By offering a complementary framing, VPF could potentially expand engagement in longtermist work, attracting groups that haven’t traditionally participated because of their less risk-focused approach to development and the future. Some actors who fit this description (such as Marc Andreessen and the Effective Accelerationists) seem to have reacted against current EA and longtermist discourse, as they think too little attention is given to the benefits of development.
We think it may be especially important to consider VPF for people who are excited about (unchecked) technological development and/or are working on two-sided technologies with limited concern for the downside risks or the long-term future. Our intuition is that such actors would be more attracted to a VPF framing, as this perspective, while emphasizing longer-term effects and the importance of safety and reducing existential risk, also talks about the upside, which fits more naturally with a development mindset. Further, if someone already is, or is likely to start, working on two-sided technologies or short-term capabilities without much consideration for the long-term future, the counterfactual value of getting them to update seems especially high.
There is a very important counter-consideration here: a VPF approach might lead to over-excitement about, and work toward, unchecked technological development. Therefore, it is crucial that VPF places a strong emphasis on safety-centric strategies like differential technological development.
There is currently a bias toward negative futures
Our perception is that there is little discussion or description of potential VPFs, while there are plenty of depictions of dystopian futures. Further, the few existing attempts to describe VPF often fall short.
Holden Karnofsky, writing about potential Utopias, notes that they often seem:
Dull: Due to the lack of conflict and challenge.
Homogeneous: It’s hard to describe a world of people living very differently from each other. To be concrete, utopias tend to emphasize specific lifestyles, daily activities, etc., which ends up sounding totalitarian.
Alien: Anything too different from the status quo is going to sound uncomfortable to many people.
We think the lack of compelling visions for the long-term future is problematic for a couple of different reasons (this is obviously more or less what this entire post is about, but a few things seem especially relevant to note here):
Faced with a flow of dystopian stories in our popular culture, it seems easy to feel pessimistic about the future, inhibiting action.
Building on the above, we think many underestimate the potential of scientific and technological advances. Some hold that such advances are neutral or even harmful, cautioning that while some problems may be solved, others of equal or greater weight might emerge, or that the benefits plateau beyond a certain threshold of advancement. This perspective seems especially likely to take hold where pessimistic narratives overshadow optimistic ones, and we think it is more indicative of our lack of imagination than of any intrinsic limitations of the future itself.
Short-term aims and visions are not enough. While we think addressing immediate societal concerns is paramount, an exclusive focus on them seems myopic. We should also look to the horizon and consider humanity’s longer-term objectives, including preventing potential large-scale catastrophes, not only for those living today but for the broader scope of humanity’s potential future. If notions of positive futures seem ambiguous or ill-defined, it becomes tempting to shift focus to more tangible, short-term goals.
Radical changes and (what we might think of as) the long-term future could be here surprisingly quickly, so we want to know what to aim for. See more below.
As an end note here, we believe that, while conceptualizing VPF seems complicated, it is worth further effort; we haven’t tried enough to conclude it is intractable.
The future could be here sooner than we think
This section, arguing that the future could be here sooner than we think, mainly summarizes some posts in Holden Karnofsky’s Most Important Century series.
The Duplicator: Instant Cloning Would Make the World Economy Explode
Forecasting Transformative AI, Part 1: What Kind of AI?
Why AI alignment could be hard with modern deep learning
Standard economic theory suggests that any technology capable of fully automating innovation would lead to productivity skyrocketing to infinity this century. This is due to the creation of a powerful feedback loop: more resources lead to more innovation, which leads to more resources, and so on. To some extent, we have already seen this—economic history followed a loop of more resources, leading to more people, leading to more ideas, which led to more resources, and so on. However, a couple of hundred years ago, during the “demographic transition,” the “more resources leading to more people” dynamic of that loop stopped. Instead of leading to more people, more resources led to richer people; at least, this has been the case for more developed nations. If some other technology were to restore the “more resources leading to more ideas” dynamic, the feedback loop could come back. One such technology could be the right kind of AI, where more resources lead to more AIs, which lead to more ideas, which lead to more resources, and so on. This means that a radically different (long-run) future could be upon us shortly after relevant AI is developed (if it ever is). And as it seems relevant AI could be developed in the decades to come, our perspectives on which future to aim for may become increasingly important.
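As a toy illustration of why such a feedback loop is so explosive (this is our own sketch, not a model from Karnofsky’s series; the functional form and parameters are arbitrary assumptions): if the growth rate of output itself scales with output, because output funds the researchers, human or artificial, who produce the ideas, then output diverges in finite time rather than merely growing exponentially.

```python
# Toy model of the "more resources -> more ideas -> more resources" loop.
# Illustrative only: dY/dt = c * Y**2 is a stand-in for "the rate of new
# ideas scales with both the resources available and the researchers
# (human or artificial) that those resources can fund."

def years_until_explosion(c: float, y0: float = 1.0,
                          cap: float = 1e12, dt: float = 1e-4) -> float:
    """Integrate dY/dt = c * Y**2 forward until output Y exceeds `cap`.

    The analytic solution, Y(t) = y0 / (1 - c * y0 * t), diverges at
    t = 1 / (c * y0): a finite-time singularity, not merely fast growth.
    """
    y, t = y0, 0.0
    while y < cap:
        y += dt * c * y * y  # the feedback: growth rate scales with Y itself
        t += dt
    return t

# With c = 0.02 and y0 = 1, the analytic blow-up time is 1 / 0.02 = 50 years;
# the simulation lands close to it (stopping at a large-but-finite cap).
print(f"output explodes after roughly {years_until_explosion(0.02):.0f} years")
```

Replacing $Y^2$ with $Y$ (a loop whose strength does not grow) gives ordinary exponential growth, and replacing it with a constant gives linear growth; neither explodes. The difference between these regimes is, roughly, the difference the series argues the right kind of AI could make.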
We think this raises the potential importance of conducting more research on VPF and desired futures now, particularly as we put some probability on scenarios where our current choices will shape future possibilities in non-trivial ways. By working to increase the chances of VPF, we improve our odds of building AI systems, and a society, that not only mitigate downside risks but also increase upside risk.
Information value
As indicated throughout this post, we believe little work has been done on VPF, and we think the potential value at hand is high but very uncertain. Thus, it seems there is substantial information value to be gained by further work and research on VPF. (Though we think it is far from obvious how to gain more information, making it less tractable.)
3. Reasons why we might not want to focus more on VPF
We want to stress that we feel uncertain about all our suggested reasons for concentrating more on VPF. Thus, relevant counterarguments could simply be negations of these suggestions. Below, we list the counterarguments that seem most relevant to us.
We should prioritize work on avoiding existential risk
As we have tried to highlight throughout the post, we think work on existential risk is extremely important. One of the authors, Vilhelm Skoglund, thinks this is where most people focused on shaping the long-term future should direct their resources (and that there are compelling reasons for existential risk reduction even from a near- to medium-term perspective), while being uncertain how marginal resources should be split between existential risk reduction and other approaches to shaping the long-term future. But you might think it is clear that existential risk deserves more, or even all, resources. There could be several reasons for this:
We can work on VPF when we have reduced more urgent existential risk to a sufficient level, and there will not be any meaningful lock-in limiting the potential upside following from this delay.
There is a “low” upper bound on how good the future could be, which future civilizations can be expected to reach by default; or moral convergence is very likely, conditional on survival.
It is not tractable to find and/or take action that has big, contingent, and persistent effects on our long-term future other than reducing existential risk.
Building on the above, feedback loops for increasing the likelihood of VPF are (even) worse than feedback loops for decreasing existential risk.
One might prioritize existential risk based on expected value in the near to medium term, rather than being (primarily) motivated by potential value in the long-term future.
Work on VPF could potentially (unintentionally) increase existential risks. For instance, by accelerating AI capabilities or atomically precise manufacturing without proper safety considerations.
It is (much) more important to avert expected suffering than to increase expected well-being. (This would likely push you to focus on decreasing s-risks rather than existential risks.)
VPF will not effectively motivate people to work on shaping the long-term future
We have suggested some reasons why VPF might be an effective complementary framing to attract and motivate people to work on shaping the long-term future, including existential risk. Reasons you might disagree:
VPF approaches and messaging might attract individuals who are development-biased and underestimate the complexity of long-term challenges, leading, at best, to resources being redirected to ineffective attempts to shape the long-term future and, at worst, to naive accelerationism that causes harm.
Focus on VPF might be seen as more detached from present concerns than existential risk, leading to backlash from those focusing on the near term. It might exacerbate the perception that most people concerned about the long-term future, including existential risk, don’t care much about near-term issues and/or are mainly motivated by small chances of astronomically good outcomes.
Focus on VPF might diminish the perceived urgency to take action today to bring about a positive long-term future and mitigate existential risks.
Several existential risks could affect those living today, making society more likely to engage with them for (partly) self-serving and near-term reasons. This might be especially true for institutions with built-in near-term focus, such as (most) political institutions.
Existential risk is simpler and more concrete than VPF. When discussing existential risks, we agree on one fundamental point: we desire survival and the continued potential for good things. When discussing VPF, by contrast, we disagree on what those good things are. This might make existential risk a more compelling message for people to “rally” behind. Relatedly, and as noted above, it is hard to craft compelling visions of the future, which might make such attempts ineffective.
VPF may seem overly optimistic or “pollyannaish,” coming across to people as an unrealistic promise of a guaranteed positive future. Further, a lack of progress, or setbacks, could lead to disappointment and a loss of motivation to engage in work to shape the long-term future.
We should prioritize focusing on individual causes rather than on VPF in general.
Work on VPF will have negative consequences
Utopian thinking has a bad track record, and there might be a risk of similar developments with work on VPF. As noted by Carlsmith:
4. Ending Remarks
This post has explored the concept of very positive long-term futures and its potential influence and value to the EA community and efforts to shape the long-term future. We remain uncertain about whether to invest more resources in VPF approaches and framings in the long term, but we think it seems plausible enough that we should spend resources to get higher clarity on this and experiment more with VPF approaches and framings. As mentioned, perhaps it could help us gain a better understanding of what futures to aim for, help prioritize between causes and interventions, and motivate more people to work on shaping the long-term future.
A few of our key uncertainties that have not been highlighted in the above counterarguments:
How many people are pursuing relevant work on VPF, compared to relevant work on existential risk?
Will (most of the) potential upside of the long-term future be harnessed by default, conditional on avoiding existential risks?
Apart from research similar to that of Ord, what are the most promising alternatives to existential risk reduction for shaping the long-term future? Are there any tractable causes or interventions?
We’d love to receive feedback on the post and the potential benefits and disadvantages of different approaches and framings, as well as those not covered in this post. Here are a few prompts for further discussion:
What are your views on the potential pros and cons of more focus on VPF? With which points do you agree or disagree? What are other arguments?
Which term resonates most with you: Existential Hope, Positive Trajectory Change, Utopia, or perhaps a similar term not described in the post?
Are there any significant pieces of writing or research on this topic that we have overlooked?
What are the best accounts of VPF you’re familiar with?
Do you think a more rigorous conceptual mapping, or the creation of frameworks and standardized terminology for VPF, would be useful?