Thinking beyond the long-term
Note: This remains a work-in-progress. Please feel free to contact me with any thoughts or feedback on any flaws in my thinking or ways in which I should further flesh out the idea to make it clearer.
Let us start with a bit of background for framing. For the vast majority of human history, our circle of empathy included ourselves and our tribe. What counts as our tribe has grown from dozens of people who were in one way or another directly related to us to nations of tens of millions. Moving beyond that, we have begun to extend empathy to all of humanity and even to some non-human creatures, beyond the ones we are already culturally inclined toward, like cats and dogs.
In recent years, there has been an effort to think not just about the creatures alive today but also about those that will be alive in the long term. After all, the thinking goes, why should our lives matter more just because we are the ones here now? And if we can make better choices that will positively impact the well-being of those future creatures, are we not ethically obliged to do so? This idea, known as longtermism, is a logical extension of our expanding circle of empathy.
After all, because we care so deeply about our own children, we put in massive effort to ensure they are well-fed, educated, and otherwise set up to live a good life. We also recognize that other people around the world are not so different from ourselves and care about their children and grandchildren just as much. As evidenced by the passionate activism around climate change, many of us have now successfully expanded our circle of empathy not just to the rest of humanity, but to the rest of humanity plus a generation or two into the future.
Longtermists challenge us to think beyond that. Some of them are concerned with centuries, some with millennia, and some with millions of years, but the end goal is less discussed. Within longtermism, that end state is not well defined. The longtermist perspective is essentially about ensuring that those of us in the 21st century do not do something foolish that precludes the possibility of a thriving, spacefaring, multiplanetary civilization. I would certainly agree that is a desirable outcome in the long term, but it is also possible for us to think longer term than the longtermists.
If we think trillions of years into the future, once all of the Universe has been explored, all of the adventures that are to be had have been had, and the Universe is nearing its heat death, what end state do the longtermists foresee?
Some have been so disturbed by the increasing power of humans to shape and damage their environment that they have argued the end goal should be the voluntary extinction of the human species through a refusal to reproduce. While the formal organization associated with this way of thinking is quite fringe, a large number of people agree with the sentiment and do not wish to have children as a result. They have been told that having a child is the single worst thing one can do for the environment and that the planet is already overpopulated.
There are several issues with this line of thinking. Firstly, it is a self-terminating idea. Any idea that demands its followers not reproduce will be memetically weak, since the potent parent-to-child vector of idea transmission is automatically eliminated unless the adherent adopts a child. Those who do not follow that line of thought, on the other hand, will continue to have children, some of them many.
Let us leave that aside for a moment and consider what would happen if this ideology were successful in its aims. This could come about either by convincing 100% of humans not to have children or by forcing the matter, such as through some sort of hypothetical highly contagious and deadly disease. Even if humans were removed from the playing field, life would continue on Earth and wherever else it exists in the vast Universe.
Eventually, another intelligent species would evolve. It could take anywhere from hundreds of thousands to many millions of years, but it would happen eventually somewhere. It would require an immense amount of suffering to get to that point, akin to the suffering our own ancestors experienced for millions of years. Their lives were nasty, brutish, and short, as many of the lives of our fellow humans today continue to be.
Our own recorded history began only around 5,000 years ago, with roughly 300,000 years of unrecorded human history before that. More recently still, it is only in the last few centuries that modern nation-states and corporations came into being, enabling global trade networks, industrial economies, and innovation on an unprecedented scale. The development of these systems has allowed humanity to emerge into relative abundance and to develop technologies like space travel, artificial intelligence, and gene editing over what amounts to the blink of an eye.
This holds true not just for our most advanced societies, but for all of humanity. Over the past decade, I have lived in some of the least developed nations on Earth. For all the hardships people in these nations undergo, I can say from firsthand experience that the lives they live would be totally alien to past humans. Many people in these places have access to technologies like antibiotics, currency notes, woven fabrics, fixed dwellings, steel machetes and cooking pots, radios, basic cell phones and solar panels, and perhaps even a vehicle like a speedboat or a used RAV4, all of which would appear magical or simply incomprehensible to humans throughout most of our history. When we have collectively come so far in such a short time, it would be ludicrous to throw it all away.
If we nonetheless self-extinguish, intentionally or otherwise, it is likely that at some point, somewhere in the Universe, another intelligent species would evolve. As with humanity, it would likely take hundreds of thousands of years for that species to reach the stage we are at today. If that species also self-annihilates, such as out of antinatalist guilt over its negative impact on the environment, this would do nothing to alleviate suffering. It would simply mark a continuation of an endless cycle that leads nowhere. Rather than putting the rare emergence of intelligence to use in reducing suffering, this view, in its most extreme form, simply perpetuates it indefinitely.
The ultimate problem with this idea, as with longtermism, is that it does not resolve the question of the end state.
You might be thinking, “100 trillion years is a long way off. Why should I think about that now?” The reason is that once we agree on the end state we want to see, we can work backwards from there to inform our collective worldviews today.
Consider R.N. Smart’s argument that a “ruler who controls a weapon capable of instantly and painlessly destroying the human race” is a logical implication of Karl Popper’s negative utilitarianism. According to Smart, using such a weapon would be the ruler’s duty, as it would be “bound to diminish suffering.” Although Smart offered that argument as an absurd consequence of negative utilitarianism, some people, such as antinatalists and human extinctionists, are in fact in favor of such an outcome.
However, what Smart and the extinctionists do not consider is this: what comes after the ruler ends all human life? All that would remain is hundreds of millions of years of animals fighting to fulfill their basic needs in a state of nature, until and unless another intelligent species evolves on Earth and repeats this brutal cycle of evolution, innovation, guilt, and self-destruction, until some natural catastrophe wipes everything out once and for all, rendering all that suffering and struggle truly pointless.
One possible outcome of longtermist thinking would be a Universe filled with innumerable beings, endless hundreds of trillions of them, experiencing pure, eternal ecstasy. While there is a certain appeal to that outcome compared to the extinctionist impulse, it has a kind of emptiness to it. It simply takes things we care about now, namely a thriving civilization filled with people who are happy, and projects them indefinitely into the future.
As an intermediate goal, I have no objection to it. As an end state, however, it is akin to the paperclip problem: it takes something we want in abundance, paperclips in Bostrom’s example or happy people who are not suffering in the longtermist one, and leads to a sort of dystopian outcome. In this case, the end state may be something akin to trillions of brains in trillions of vats, all hooked up to a system that disables hedonic adaptation and pumps them full of dopamine while they hallucinate. Presumably, this would go on until the Universe ends, putting an end to that project and to all of existence.
If neither creating trillions upon trillions of people who experience as much happiness as possible for as long as possible nor extinguishing all humans is a good end outcome, what would be?
When I first developed this view in the early 2000s, I was uncertain about its validity, and I was unsure of how best to express or test it. Before gaining enough confidence, I had to understand the views of the world I was born into. I have spoken to followers of Abrahamic faiths, including Christian pastors and Muslim imams, who believe the end game is to follow God’s will so that they may reach Heaven or Jannah and exist there blissfully forever at his side. Likewise, I have spoken to Hindu and Buddhist priests and scholars who believe the aim is to overcome samsara and duhkha to achieve moksha or nirvana. I have also spoken to anxious Europeans choosing not to have children in order to protect those children from suffering or from contributing to a climate disaster, and to Oxford-educated effective altruists, some of whom would find the idea of trillions of brains in trillions of pleasure vats akin to the Heaven envisioned by the religious.
From these conversations, a common thread emerged that aligned with my own thinking: all of these traditions prioritize bringing cycles of suffering to an end. How we should go about that, however, is not immediately clear, especially if we wish to do so in the temporal world.
If we seek to accomplish this goal in the temporal world and reject both human extinction and hooking our brains up to an experience machine as valid end states, where does that leave us? I have come to the conclusion that the following principles encompass where we should be aiming:
1) Ensure humans or some other intelligence continues to exist forever. There is no purpose in gaining all of the knowledge of the Universe if we are all destroyed at the end of it. Our hard-won knowledge must be protected.
2) Progressively gain knowledge until all is known, except for knowledge that requires inducing suffering. For instance, one of the most unacceptable and unethical acts possible would be to create virtual worlds full of ignorant creatures who suffer and who themselves create virtual worlds full of ignorant creatures who suffer, ad infinitum. The fact that this is so deeply unethical is one reason I doubt Bostrom’s argument that we likely live in a simulation; his argument does not give enough weight to the desire of highly advanced civilizations to prevent such simulations from ever being run (see the brief sketch after this list).
3) End all suffering. This includes both human and non-human suffering, on Earth and throughout the Universe, including any hypothetical multiverses or virtual worlds. Anywhere that suffering can be eliminated, it must be.
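To make that doubt about the simulation argument more concrete, here is a rough sketch of the structure of Bostrom’s reasoning as I understand it, paraphrasing the formula from his 2003 paper in my own notation rather than quoting it:

$$ f_{\text{sim}} \approx \frac{f_{P}\,\bar{N}}{f_{P}\,\bar{N} + 1} $$

Here $f_{\text{sim}}$ is the fraction of observers with human-like experiences who are simulated, $f_{P}$ is the fraction of civilizations that reach a technologically mature, posthuman stage, and $\bar{N}$ is the average number of ancestor-simulations such a civilization runs. The conclusion that we likely live in a simulation requires $f_{P}\,\bar{N}$ to be large. My doubt amounts to expecting that ethical maturity drives $\bar{N}$ toward zero, which lands us on one of the other horns of Bostrom’s trilemma rather than on the simulation horn.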
These principles allow us to work backwards from that end state to today and determine what to prioritize collectively. If we understand our place in the Universe, we can more readily determine how to move forward. We should worry about things like climate change, but we should reject the extinctionist and antinatalist impulses as shortsighted and wrongheaded. Likewise, we should aim for a thriving, happy, multiplanetary species in the long term, but without losing sight of the end state.
So what would this end state I am proposing look like? I would put forward that it should be an eternal knowledge custodianship. This custodianship could take any number of forms, perhaps a single entity like Asimov’s Multivac or an entire Type IV civilization. We will have plenty of time to work out those specifics. What is important for the time being are the broad strokes of the end state. Specifically, the principles referenced above should be enshrined in its behavior: 1) ensuring the ongoing existence of sentience, 2) gaining total knowledge except that knowledge which requires inducing suffering, and 3) ending all suffering.
This approach ensures that the billions of years of suffering that lifeforms have endured were not totally meaningless, and it prevents that cycle from repeating. Instead, it leaves, at the end of it all, a peaceful Universe in a perpetual state of fully knowing itself.