Humanity’s vast future and its implications for cause prioritization

Summary

  • Humanity could last millions of years and spread beyond Earth, resulting in an unimaginably large number of people being born. Even under conservative assumptions, at least 100 trillion people could be born in the future.

  • The best way to help future people have good lives is to create positive trajectory changes—durable improvements to the world at every point in the future. Reducing existential and suffering risks would improve humanity’s entire future trajectory, whereas speeding up economic growth would only create benefits over the next few thousand years (which is still a very big deal for the next few generations).

  • Based on this information, I’ve pivoted toward reducing suffering risks (S-risks), or risks of astronomical suffering in the future, because it is an important and neglected strategy for creating positive long-term trajectory changes.

How many people could there be?

There are almost 8 billion people in the world today. The United Nations estimates that by 2100, the global population will stabilize around 11 billion people with 125 million births per year. If we have an idea of how long humanity will survive, then we can estimate how many people will be born in the future.

Modern humans have been around for 200,000 years, and the average mammalian species lasts for a million years. So let’s suppose that humanity will survive for another 800,000 years. If 125 million people are born each year, then 100 trillion people will eventually be born. That’s 12,500 times as many people as are alive today. If we last 10 million years, like some mammalian species, then we could eventually give rise to over a quadrillion people.
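
To make the arithmetic explicit, here’s a minimal sketch of that estimate (the birth rate and survival horizons are just the assumptions stated above, not forecasts):

```python
# Back-of-the-envelope estimate of total future births, using the
# assumptions above: 125 million births per year, sustained for
# either 800,000 or 10 million more years.
BIRTHS_PER_YEAR = 125_000_000
PEOPLE_ALIVE_TODAY = 8_000_000_000

for years_remaining in (800_000, 10_000_000):
    future_births = BIRTHS_PER_YEAR * years_remaining
    multiple = future_births / PEOPLE_ALIVE_TODAY
    print(f"{years_remaining:>10,} years: {future_births:.2e} births "
          f"({multiple:,.0f}x today's population)")

# Output:
#    800,000 years: 1.00e+14 births (12,500x today's population)
# 10,000,000 years: 1.25e+15 births (156,250x today's population)
```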

Of course, human civilization could survive for much longer than that, or it could end much sooner. Even in extreme scenarios, climate change is unlikely to destroy civilization or render humans extinct on its own, but it could indirectly lead to our extinction by driving political instability and global conflict.[1] By some accounts, artificial general intelligence could be developed this century, and if we don’t have the technology to align AGI systems with our values, one could go rogue and kill us all simply because it is indifferent to our survival.[2]

On the other hand, we could use our technological capabilities to live for billions of years in the Solar System and beyond. This century, we will probably start establishing human settlements on other planets, like Mars, and on moons, like Saturn’s largest moon, Titan. Some scientists have speculated that we will be able to start traveling through deep space by the end of the 24th century.[3] Earth will remain habitable for at least another 500 million years, but if we have a presence elsewhere in the Solar System, we can survive for much longer than that. When the Sun eventually becomes a white dwarf in about 8 billion years, we could still live in artificial space habitats orbiting what’s left of it, but most of us would likely be living on exoplanets and in space habitats orbiting other stars.[4]

Trajectory changes vs. speeding up growth

This vast potential has profound implications for how best to help as many people as possible, both present and future. Many longtermists believe that creating trajectory changes—durable changes to the amount of good in the world at every point in the future—is more valuable for the trillions of people yet to be born than merely speeding up development.[5] Preventing an existential catastrophe, such as human extinction or the collapse of civilization, is the prototypical trajectory change, since the entire value of the future hangs in the balance.

In Stubborn Attachments, economist Tyler Cowen argues that humanity must focus on three things to make the long-term future go as well as possible: reducing existential risks, protecting basic human rights, and maximizing the rate of sustainable economic growth. On timescales of 50 to 100 years, speeding up economic growth is significant. If the world economy grows at 2% per year for the next 100 years, it will end up about 7 times its current size; at 3% per year, it will grow by a factor of about 19. That one-percentage-point difference would be transformative for people alive over the next century: global extreme poverty would be eradicated sooner, life expectancy would be higher, and people would be more educated and tolerant.
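
As a quick check on those figures, here’s the compounding calculation (the 2% and 3% rates are the hypothetical growth rates from the paragraph above):

```python
# Compound growth of the world economy over a century at 2% vs. 3%.
for annual_rate in (0.02, 0.03):
    growth_factor = (1 + annual_rate) ** 100
    print(f"{annual_rate:.0%} per year for 100 years -> "
          f"{growth_factor:.1f}x today's economy")

# Output:
# 2% per year for 100 years -> 7.2x today's economy
# 3% per year for 100 years -> 19.2x today's economy
```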

But economic growth cannot realistically continue at a sustained exponential rate for more than a few thousand years. If the world economy grows by 2% per year for the next 8,200 years, then we would eventually “need to be sustaining multiple economies as big as today’s entire world economy” per atom in the Milky Way galaxy.[6] Whether the economy grows at 2% or 3% over the next 500 years will probably make little difference to the living standards of people alive 10,000 years from now, because the economy will have hit its physical limits well before then. On the other hand, global extreme poverty and income inequality could have persistent negative effects on the global political order. Since political developments tend to get locked in, reducing global poverty and inequality today could have far-reaching benefits for the long-term future.
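
To see why, compare the growth factor from 8,200 years of 2% annual growth with the number of atoms in the Milky Way. The sketch below takes 10^70 atoms as an order-of-magnitude assumption:

```python
# Growth factor from 2% annual growth sustained for 8,200 years,
# set against a rough atom count for the Milky Way.
growth_factor = 1.02 ** 8_200   # ~3.3e70
atoms_in_galaxy = 1e70          # order-of-magnitude assumption

print(f"growth factor: {growth_factor:.1e}")
print(f"world economies per atom: {growth_factor / atoms_in_galaxy:.1f}")

# Output:
# growth factor: 3.3e+70
# world economies per atom: 3.3
```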

Reducing S-risks as a key priority

In light of this, I’ve decided to focus more on making positive changes to humanity’s long-term trajectory and less on speeding up economic growth. Although I still believe that reducing barriers to growth and development is important, I think positive trajectory changes that affect humanity’s entire future are more important still. I group positive trajectory changes into three basic categories:

  • Decreasing existential risks: reducing the chance of events, such as human extinction, that would permanently destroy our ability to shape the future.

  • Decreasing suffering risks (S-risks): reducing the risk of future suffering that greatly exceeds the amount of suffering in the universe today.

  • Increasing the chance that the future will be very good rather than mediocre.

Tentatively, I think reducing S-risks is the most promising strategy for me to pursue, because it is far more neglected than reducing existential risks in general. The field of S-risks is new to me, but commonly discussed S-risks include:

  • Factory farming spreads to space. Farmed animals already outnumber humans at least 10 to 1, so if humans ever bring farmed animals to space, the resulting number of farmed animals could be very large. Most farmed animals live pretty miserable lives, so this would lead to an enormous amount of suffering.

  • Similarly, humans introduce Earth-based life to other planets and thus create an astronomical number of suffering wild animals.

  • Artificial intelligence systems or digital people with the capacity to feel pain are widely used and made to suffer a lot. Since digital sentient beings can be created very efficiently compared to biological ones, they could exist in extremely large numbers in the future, making this an especially pressing concern.

  • Misaligned superintelligence creates astronomical suffering as a means to its programmed goals, possibly involving one or more of the scenarios above.[7]

Historically, I have been skeptical of claims that AI safety is the most pressing issue from an EA perspective, but I’ve started to come around. Although I still think there could be more important causes, I now believe that advanced AI is a major threat to the future, factoring in both existential risks and S-risks, and I recognize that I have a good personal fit for the field. So at EA Global this coming weekend, I’m going to explore whether I’d be a good fit for AI safety roles and which areas of AI safety seem most promising for reducing S-risks. I’ve also been spinning up a Discord server focused on the intersection of longtermism and animal welfare, an area that I think is especially important and neglected by the EA community. Even though I’m uncertain about these initiatives, I’m excited about their potential impact on future suffering.

  1. ^

    Hilton, Benjamin (2022). “Climate change.” 80,000 Hours.

  2. ^

    Karnofsky, Holden (2021). “The Most Important Century.” Cold Takes.

  3. ^

    Tangermann, Victor (2021). “NASA Scientists Predict Settlements on Moons of Saturn, Jupiter.” Futurism.

  4. ^
  5. ^

    Beckstead, Nick (2013). “A proposed adjustment to the astronomical waste argument.” EA Forum.

  6. ^

    Karnofsky, Holden (2021). “This Can’t Go On.” Cold Takes.

  7. ^

    Gloor, Lukas (2016). “Altruists Should Prioritize Artificial Intelligence.” Center on Long-Term Risk.