> there is ample opportunity for peaceful and mutually beneficial trade with AIs that do not share our utility functions

What would humans have to offer AIs for trade in this scenario, where there are “more competitive machine alternatives to humans in almost all societal functions”?

> as long as this is done peacefully and lawfully

What do these words even mean in an ASI context? If humans are relatively disempowered, this would also presumably extend to the use of force and legal contexts.
> What would humans have to offer AIs for trade in this scenario, where there are “more competitive machine alternatives to humans in almost all societal functions”?

In a lawful regime, humans would have the legal right to own property beyond just their own labor. This means they could possess assets—such as land, businesses, or financial investments—that they could trade with AIs in exchange for goods or services. This principle is similar to how retirees today can sustain themselves comfortably without working. Instead of relying on wages from labor, they live off savings, government welfare, or investments. Likewise, in a future where AIs play a dominant economic role, humans could maintain their well-being by leveraging their legally protected ownership of valuable assets.

> What do these words even mean in an ASI context? If humans are relatively disempowered, this would also presumably extend to the use of force and legal contexts.
In the scenario I described, humanity’s protection would be ensured through legal mechanisms designed to safeguard individual human autonomy and well-being, even in a world where AIs collectively surpass human capabilities. These legal structures could establish clear protections for humans, ensuring that their rights, freedoms, and control over their own property remain intact despite the overwhelming combined power of AI systems.
This arrangement is neither unusual nor unprecedented. Consider your current situation as an individual in society: compared to the collective power of all other humans combined, you are extremely weak. If the rest of the world suddenly decided to harm you, it could easily overpower you, killing you or taking your possessions with little effort.
Yet, in practice, you likely do not live in constant fear of this possibility. The primary reason is that, despite being vastly outmatched in raw power, you are integrated into a legal and social framework that protects your rights. Society as a whole coordinates to maintain legal structures that safeguard individuals like you from harm. For instance, if you live in the United States, you are entitled to due process under the law, and you are protected from crimes like murder and theft by legal statutes that are actively enforced.
Similarly, even if AI systems collectively become more powerful than humans, they could be governed by collective legal mechanisms that ensure human safety and autonomy, just as current legal systems protect individuals from the vastly greater power of society-in-general.
I don’t understand how you think these legal mechanisms would actually serve to bind superintelligent AIs. Or to put it another way, could chimpanzees or dolphins have established a legal mechanism that would have prevented human incursion into their habitat? If not, how is this hypothetical situation different?
Regarding the idea of trade — doesn’t this basically assume that humans will get a return on capital that is at least as good as the AIs’ return on capital? If not, wouldn’t the AIs eventually end up owning all the capital? And wouldn’t we expect superintelligent AIs to be better than humans at managing capital?
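The compounding dynamic behind this question can be made concrete with a toy model. The specific growth rates below (2% for human-owned capital, 10% for AI-owned capital) are illustrative assumptions, not forecasts; the point is only what happens to the human share when AI returns persistently exceed human returns:

```python
# Toy model: human vs. AI capital shares under persistently different
# rates of return. All numbers are illustrative assumptions.
human, ai = 50.0, 50.0       # start with an equal split of capital
r_human, r_ai = 0.02, 0.10   # assumed annual returns (human < AI)

for year in range(100):
    human *= 1 + r_human
    ai *= 1 + r_ai

share = human / (human + ai)
print(f"Human share of total capital after 100 years: {share:.4%}")
```

Under these assumptions the human share falls below a tenth of a percent within a century. More generally, the share decays roughly like ((1 + r_human) / (1 + r_ai))^t, so the qualitative conclusion follows from any persistent gap in returns, not from the particular numbers chosen here.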