We can assess the strength of people’s preferences for future generations by analyzing their economic behavior. The key idea is that if people genuinely cared deeply about future generations, they would save a large share of their income for the benefit of those future individuals rather than spend it on themselves in the present. Such behavior would indicate a strong intertemporal preference for improving the lives of future people over the well-being of currently existing individuals.
For instance, if people truly valued humanity as a whole far more than their own personal well-being, we would expect parents to allocate the vast majority of their income to their descendants (or humanity collectively) rather than using it for their own immediate needs and desires. However, empirical studies generally do not support the claim that people place far greater importance on the long-term preservation of humanity than on the well-being of currently existing individuals. In reality, most people tend to prioritize themselves and their children, while allocating only a relatively small portion of their income to charitable causes or savings intended to benefit future generations beyond their immediate children. If people were intrinsically and strongly committed to the abstract concept of humanity itself, rather than primarily concerned with the welfare of present individuals (including their immediate family and friends), we would expect to see much higher levels of long-term financial sacrifice for future generations than we actually observe.
To be clear, I’m not claiming that people don’t value their descendants, or the concept of humanity, at all. Rather, my point is that this preference does not appear strong enough to override the considerations outlined in my previous argument. While I agree that people do have an independent preference for preserving humanity, beyond just their personal desire to avoid death, this preference is typically not much stronger than their desire for self-preservation. As a result, my previous conclusion still holds: from the perspective of present-day individuals, accelerating AI development can be easily justified as long as one does not believe the probability of human extinction from AI is high.
The economic behavior analysis falls short. People usually do not expect to have a significant impact on the survival of humanity. If, in past centuries, people had saved a large part of their income for “future generations” (including for us), this would likely have had almost no impact on the survival of humanity, and probably not even a significant impact on our present quality of life. The expected utility of saving money for future generations is simply too low compared to spending that money on oneself in the present. This only means that people (reasonably) expect to have little influence on the survival of humanity, not that they are relatively okay with humanity going extinct. If people could somehow directly influence, perhaps via voting, whether to trade a few extra years of life for a significantly increased likelihood of humanity going extinct, I think the outcome would be predictable.
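To make this expected-utility point concrete, here is a minimal illustrative sketch. Every number in it is hypothetical and chosen only to show the shape of the argument, not to estimate anything: even a person who assigns enormous value to humanity’s survival gets almost no expected benefit from saving, because one person’s savings barely move the outcome.

```python
# Purely illustrative expected-utility comparison (all numbers are hypothetical).
# Claim being illustrated: even if someone places a very high value on humanity's
# survival, saving income "for future generations" barely changes that outcome,
# so its expected utility stays far below the utility of present spending.

value_of_humanity_surviving = 1e9     # hypothetical utility assigned to avoiding extinction
delta_p_survival_from_saving = 1e-12  # hypothetical change in survival probability from one person's savings
utility_of_spending_now = 1.0         # hypothetical utility of spending that income on oneself today

expected_utility_of_saving = delta_p_survival_from_saving * value_of_humanity_surviving

print(f"Expected utility of saving:  {expected_utility_of_saving:.6f}")
print(f"Utility of spending now:     {utility_of_spending_now:.6f}")
# With these made-up numbers, saving yields ~0.001 versus 1.0 for spending now,
# even though the person's valuation of humanity's survival is enormous.
```

On this reading, low observed savings for future generations reflect low perceived influence, not a weak preference against extinction.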
Granted, I’m indeed not specifically commenting here on what delaying AI could realistically achieve. My main point was only that people’s preference for humanity not going extinct is significant, and that it easily outweighs any preference for future AIs coming into existence, without relying on immoral speciesism.