I can speak for myself: I want AGI, if it is developed, to reflect the best possible values we have currently (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. It's unclear to me how much these values would differ if the AGI were developed in a free country versus an unfree one, because many of the AI systems the US government uses could also be put to oppressive purposes (and arguably already are used in oppressive ways by the US).
Holden Karnofsky calls this the "competition frame", in which what matters most is who develops AGI. He contrasts it with the "caution frame", which focuses more on whether AGI is developed in a rushed way than on whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution one.
Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.
Thanks for the explanation! Though I think I've been misunderstood.
I would strongly prefer that e.g. Sam Altman, Demis Hassabis, or Elon Musk end up with majority influence over how AGI gets applied (assuming we're alive), rather than leading candidates in China (especially Xi Jinping). But to state that preference as "I hope China doesn't develop AI before the US!" seems … unusually imprecise and harmful, especially when nationalistic framings like that are already very likely to fuel otherisation and a lack of cross-cultural understanding.
It's like saying "Russia is an evil country for attacking Ukraine" when you could just say "Putin" or "the Russian government", or phrase what you mean in any other way less likely to spill over into hatred of Russians in general.