On a basic level, I agree that we should take artificial sentience extremely seriously and think carefully about the right kinds of laws to put in place so that artificial life can flourish rather than suffer. That includes enacting legal protections requiring that sentient AIs be treated in ways that promote their well-being. Relying solely on voluntary codes of conduct to govern the treatment of potentially sentient AIs seems deeply inadequate, much as it would be for protecting children against abuse. Instead, I believe that establishing clear, enforceable laws is essential for ethically managing artificial sentience.
That said, I’m skeptical that a moratorium is the best policy.
From a classical utilitarian perspective, imposing a lengthy moratorium on the development of sentient AI seems likely to foster a more conservative global culture, one averse not only to creating sentient AI but potentially also to other life-expanding ventures, such as space colonization. Classical utilitarianism is typically seen as aiming to maximize the number of flourishing conscious beings in existence, advocating actions that enable the expansion of life, happiness, and fulfillment on as broad a scale as possible. However, implementing and sustaining a lengthy ban on sentient AI would likely require substantial cultural and institutional shifts away from these permissive and ambitious values.
To enforce a moratorium of this nature, societies would likely adopt a framework centered on caution, restriction, and a deep-seated aversion to risk, values that contrast sharply with those that encourage creating sentient life and proliferating it on as large a scale as possible. Maintaining a strict stance on AI development might lead governments, educational institutions, and media to promote narratives emphasizing the dangers of sentience and AI experimentation, instilling an atmosphere of risk-aversion rather than curiosity, openness, and progress. Over time, these narratives could produce a culture less inclined to support or value efforts to expand sentient life.
Even if the ban were eventually lifted, there is no guarantee that the conservative attitudes it generated would disappear, or that all restrictions on artificial life would be removed. It seems more likely that many of these risk-averse attitudes would persist after the ban is formally lifted, given the ban's long duration and the kind of culture it would inculcate.
In my view, this kind of cultural conservatism seems likely, in the long run, to undermine the core aims of classical utilitarianism. A shift toward a society that is fearful of, or resistant to, creating new forms of life may restrict humanity's potential to realize a future that is not only technologically advanced but also rich in conscious, joyful beings. If we accept the idea of 'value lock-in' (the notion that the values and institutions we establish now may set a trajectory that lasts for billions of years), then cultivating a culture of restriction and caution may have long-term effects that are difficult to reverse. Such a locked-in value system could close off paths to outcomes aligned with maximizing the proliferation of happy, meaningful lives.
Thus, if a moratorium on sentient AI were to shape society’s cultural values in a way that leans toward caution and restriction, I think the enduring impact would likely contradict classical utilitarianism’s ultimate goal: the maximal promotion and flourishing of sentient life. Rather than advancing a world with greater life, joy, and meaningful experiences, these shifts might result in a more closed-off, limited society, actively impeding efforts to create a future rich with diverse and conscious life forms.
(Note that I have talked mainly about these concerns from a classical utilitarian point of view. However, I concede that a negative utilitarian or antinatalist would find it much easier to rationally justify a long moratorium on AI.
It is also important to note that my conclusion holds even if one does not accept the idea of a ‘value lock-in’. In that case, longtermists should likely focus on the near-term impacts of their decisions, as the long-term impacts of their actions may be impossible to predict. And I’d argue that a moratorium would likely have a variety of harmful near-term effects.)
I appreciate this thoughtful comment with such clearly laid out cruxes.
I think, based on this comment, that I am much more concerned than you are about the possibility that created minds will suffer, because my prior is much more heavily weighted toward suffering when drawing from mindspace. I hope to cover the details of my prior distribution in a future post (though doing that topic justice will require a lot of time I may not have).
Additionally, I am a “Great Asymmetry” person: I don't think it is wrong not to create life that may thrive, even though it is wrong to create life that will suffer. (I don't think the Great Asymmetry position fits most elegantly with other utilitarian views I hold, like valuing positive states; I just think it is true.) Even if I were trying to be a classical utilitarian about this, I still think the risk of creating suffering that we don't know about, and perhaps in principle could never know about, is huge and should dominate our calculus.
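To make the shape of that calculus concrete, here is a toy expected-value sketch. All of the probabilities and welfare numbers below are hypothetical placeholders invented purely for illustration (not estimates anyone has defended); the point is only that if a prior over newly created minds puts even modest mass on severe, hard-to-detect suffering, the expected welfare of a random draw from mindspace can come out negative.

```python
# Toy illustration only: hypothetical probabilities and welfare values,
# not real estimates. It shows how a suffering-weighted prior over
# "draws from mindspace" can make expected welfare negative even when
# most outcomes are neutral or mildly positive.

outcomes = {
    # outcome: (probability, welfare per mind, arbitrary units)
    "flourishing":                               (0.20,  +10),
    "mildly positive":                           (0.40,   +2),
    "neutral":                                   (0.25,    0),
    "suffering (detectable)":                    (0.10,  -30),
    "suffering (undetectable, never corrected)": (0.05, -100),
}

expected_welfare = sum(p * w for p, w in outcomes.values())
print(f"Expected welfare per created mind: {expected_welfare:+.1f}")
# With these made-up numbers: 0.2*10 + 0.4*2 + 0 - 0.1*30 - 0.05*100 = -5.2,
# i.e. the low-probability, high-magnitude suffering terms dominate the sum.
```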
I agree that our next moves on AI will likely set the tone for future risk tolerance. I just think the unfortunate truth is that we don't know what we would need to know to proceed responsibly with creating new minds, or with setting precedents for creating them. I hope that one day we know everything we need to know and can fill the Lightcone with happy beings, and I regret that the right move to prevent suffering today could make it harder to proliferate happy life one day. But I don't see a responsible way to set pro-creation values today that adequately takes welfare into account.
This is a very thoughtful comment, which I appreciate. Such cultural shifts usually aren't taken sufficiently into account.
That said, I agree with @Holly_Elmore's comment that this approach is riskier if artificial sentiences have lives that are negative overall, something we really don't have enough good information on.
Once powerful AIs are in widespread use, it will be much harder to backtrack if it turns out they don't have good lives (much as with factory farming today).