A Counterargument to the Argument of Astronomical Waste
In 2003 Nick Bostrom published an article titled “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” The argument is summarised in his abstract:
“With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.”
This text presents a counterargument to the astronomical-waste argument. If the counterargument holds, a utilitarian actor should not place astronomical weight on preventing existential risks. It rests on two assumptions: first, that humans require a specific environment to thrive; and second, that technological hazards outpace humanity’s collective wisdom in regulating them. The first assumption implies that random and significant changes to our current state are likely to be detrimental. The second implies that new technology can trigger unpredictable events. If sophisticated technology continues to proliferate, then, we cannot assume that the lives of future generations will be positive on balance.
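The structure of the disagreement can be made explicit with a simple expected-value sketch (the formalisation is mine, not notation from Bostrom’s paper). Writing p for the probability that colonisation eventually occurs, N for the number of future lives it would sustain, and v for the average value of one such life, the expected value of the long-term future is roughly

E[V] = p · N · v.

Bostrom’s conclusion depends on taking v to be positive: with an astronomically large N, even a tiny increase in p then outweighs every other consideration. The counterargument below targets v instead. If we cannot predict whether future lives will be good or bad, the sign of v is uncertain, and a large N no longer yields an astronomically large expected value.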
Let’s take a closer look at these assumptions. The first, that humans can only thrive in specific conditions, is at least partly an empirical matter. On one hand, humans lead remarkably diverse lives and have settled every continent on Earth except Antarctica. We survive extreme conditions, from the harsh winters of Greenland to the scorching summers of the Sahara; we have ventured into space and descended to the bottom of the Mariana Trench. On the other hand, this adaptability relies heavily on technology. Without it, surviving in Greenland, the Sahara, space, or the Mariana Trench would be difficult or impossible. Sudden, large changes in pressure, temperature, radiation, atmospheric composition, the natural environment, or access to tools would quickly prove fatal.
The above observation can be summarised as follows: although individual humans lead specialised lives with diverse areas of expertise, as a species we are generalists. Individuals can acquire new skills, but only slowly. If a sudden change occurred, humanity would therefore have to fall back on its collective knowledge to adapt, and that works only as long as the change does not destabilise the socio-technological institutions we depend on. Fast-moving challenges, such as runaway artificial intelligence, threaten precisely those institutions.
Tools can serve general purposes, but they typically work only under specific conditions. Most mobile phones let you watch videos and communicate with people across the globe, but submerge one in a swimming pool and it quickly fails. Similarly, a corrupt Tunisian government held power for many years, yet the self-immolation of a single man, Mohamed Bouazizi, ignited the revolts of the Arab Spring. Coupled with positive feedback loops, even a minor disturbance can rapidly escalate into sweeping change, as the sketch below illustrates. The lesson is that a tool designed to perform a variety of tasks is not necessarily robust to conditions it was never intended for.
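To make the escalation dynamic concrete, here is a minimal sketch in Python (my illustration, not part of the original argument; the gain, initial disturbance, and threshold are assumed numbers). Under positive feedback, a perturbation grows in proportion to its own size, so any amplifying factor greater than one eventually carries a tiny disturbance past a fixed threshold.

```python
# Minimal illustration of a positive feedback loop: a disturbance x is
# amplified in proportion to its own size each round, so any gain above
# 1.0 eventually pushes even a tiny perturbation past a fixed threshold.
# The gain, initial size, and threshold below are illustrative guesses.

def rounds_until_threshold(x0: float, gain: float, threshold: float) -> int:
    """Count amplification rounds until the disturbance exceeds threshold."""
    x, rounds = x0, 0
    while x < threshold:
        x *= gain  # positive feedback: growth proportional to current size
        rounds += 1
    return rounds

# A disturbance of one part in a million, amplified by 10% per round,
# reaches full scale (1.0) in about 145 rounds.
print(rounds_until_threshold(x0=1e-6, gain=1.1, threshold=1.0))
```

The point is not the specific numbers but the structure: under positive feedback, the question is never whether a small disturbance matters, only how long it takes to matter.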
The second assumption, that the future use of technology is likely to produce dramatic and unexpected events, cannot be measured directly. While it is possible to estimate the risks of the most potent technology available at any given time, the judgement and foresight of the decision makers involved remain elusive. History, however, offers a guess. In the words of historian Yuval Noah Harari, “humans were always far better at inventing tools than using them wisely.” The British industrialists who burned coal to power the Industrial Revolution had no idea it would contribute to global warming centuries later. Similarly, Thai politicians were eager to build the Ratchaprapha Dam to supply Southern Thailand with cheaper electricity, but failed to foresee that 44 species in Khao Sok National Park would be driven to local extinction as a result.
This selective sight can be traced to the psychology, systems, and institutions responsible for making decisions. At the collective level, capitalism creates a profitable market for innovations that meet current demand, while there is comparatively little financial gain in a thorough analysis of long-term consequences. Politicians who depend on re-election may prioritise visible, short-term wins to stay in power, neglecting interventions whose benefits arrive only decades later. And the individuals who generate that demand and elect those politicians are often ill-equipped to carry out a comprehensive risk analysis of novel tools, for lack of information or expertise.
Nor would the situation necessarily improve if responsibility for a critical technology were placed on individuals. As humans, we often suffer from a narrow focus that makes us neglect whatever is not immediately visible. The tendency is starkest in situations of immediate danger, where attention naturally narrows to our own survival: someone sinking in quicksand does not think about people starving on another continent, but about getting out of harm’s way. In more ordinary situations, our cognitive biases produce a form of selfishness that yields the “tragedy of the commons,” where everyone pursuing their own interests depletes shared resources. Moral intuitions can resolve some ethical questions, but they may fall short in complex situations that lack a clear moral consensus. Individuals would have to cast their habits and feelings aside, but what should replace them?
To handle complex problems with new technology safely, it seems we would need to introduce some ethically aligned and extremely knowledgeable panel or algorithm. Were that done, past folly would become a weak guide to the future use of potentially dangerous technology. Without such a framework, dramatic events enabled by advanced technology could shape the future, potentially causing misery or extinction. It is therefore essential that we take proactive measures to regulate the development and use of new technologies and give ethical considerations priority in decision-making.
Post Scriptum
There is a dangerous misconception that my argument implies the lives of future generations will be net negative, and that it would therefore be best for us to choose extinction soon. My actual stance is that predicting the happiness or wellbeing of people in the distant future is extremely difficult. Rather than expecting their lives to be either positive or negative, it is more prudent to acknowledge the limits of our ability to forecast their experiences. The challenge lies mainly in anticipating future wisdom in the use of new technologies.