Social constructivism and AI
I have a social constructivist view of technology: that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.
How this worldview applies to AI: Artificial intelligence systems have embedded values because they are inherently goal-directed, and the goals we put into them may align with one or more human values.[1] Also, because they are autonomous, AI systems have more agency than most technologies. But AI systems are still a product of society, and their effects depend on their own values and capabilities as well as economic, social, environmental, and legal conditions in society.
Because of this constructivist view, I'm moderately optimistic about AI despite some high-stakes risks. Most technologies are net-positive for humanity; this isn't surprising, because technologies are chosen for their ability to meet human needs. But no technology can solve all of humanity's problems.
I've previously expressed skepticism about AI completely automating human labor. I think it's very likely that current trends in automation will continue, at least until AGI is developed. But I'm skeptical that all humans will always have a comparative advantage, let alone a comparative advantage in labor. Thus, I see a few ways that widespread automation could go wrong:
AI stops short of automating everything, but instead of augmenting human productivity, displaces workers into low-productivity jobs or, worse, into economic roles other than labor. This scenario would create massive income inequality between those who own AI-powered firms and those who don't.
AI takes over most tasks essential to governing society, causing humans to be alienated from the process of running their own society (human enfeeblement). Society drifts off course from where humans want it to go.
I think economics will determine which human tasks are automated and which are still performed by humans.
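The comparative-advantage point above can be made concrete with a toy calculation. The figures below are made up for illustration; the mechanism is the standard one from trade economics: even if an AI has an absolute advantage at every task, whichever party forgoes less to perform a task holds the comparative advantage in it.

```python
# Toy comparative-advantage calculation (hypothetical productivity numbers,
# not taken from the essay). Two agents, two tasks; output is units per hour.
productivity = {
    "human": {"writing": 2, "coding": 1},
    "ai":    {"writing": 10, "coding": 40},
}

def opportunity_cost(agent: str, task: str, other_task: str) -> float:
    """Units of other_task forgone per unit of task produced by agent."""
    p = productivity[agent]
    return p[other_task] / p[task]

# The AI is better at both tasks in absolute terms (10 > 2 and 40 > 1),
# but opportunity costs differ:
human_oc = opportunity_cost("human", "writing", "coding")  # 1 / 2  = 0.5
ai_oc = opportunity_cost("ai", "writing", "coding")        # 40 / 10 = 4.0

# The human gives up less coding per unit of writing, so the human retains
# a comparative advantage in writing despite being worse at both tasks.
assert human_oc < ai_oc
```

The essay's worry is the step this toy model hides: comparative advantage guarantees a human niche only while human opportunity costs stay finite and trade remains worthwhile; it does not guarantee that the niche pays a livable wage.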
The embedded values thesis is sometimes considered a form of "soft determinism," since it posits that technologies have their own effects on society based on their embedded values. However, I think it's compatible with social constructivism because a technology's embedded values are imparted to it by people.