I was surprised to see the comments on this post, which mostly provide arguments in favor of pursuing technological progress, even if this might lead to a higher risk of catastrophes.
I would like to chip in the following:
Preferences regarding the human condition are largely irrelevant to technological progress in the areas you mention. Technological progress is driven by a large number of individuals who seek prestige and money, and there is simply consumer demand for AI and for technologies that may alter the human condition. Thus technological progress happens regardless of whether it is considered good or bad.
Further reading:
The philosophical debate you are referring to is sometimes discussed as the "1984" scenario, e.g. in Max Tegmark's "Life 3.0". He also gives reasons to believe that this scenario is unsatisfying, given the better alternatives.
Thanks for your response! I did mean to limit my post by saying that I wasn’t intending to discuss the practical feasibility of permanently stopping AI progress in the actual world, only the moral desirability of doing so. With that said, I don’t think postmodern Western capitalism is the final word on what is possible in either the economic or moral realms. More imagination is needed, I think.
Thanks for the further reading suggestion—adding it to my list.