I’m honestly curious what motivates people to support this framing. I don’t live in an English-speaking country, and for all I know, I might have severely misunderstood broader EA culture. Do people here sincerely feel nationalistic (a negative term in Norway) about this? I’d appreciate it if somebody volunteered to help me understand.
I can speak for myself: I want AGI, if it is developed, to reflect the best values we currently have (i.e. liberal values[1]), and I believe it’s likely that an AGI system developed by an organization based in the free world (the US, the EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People’s Republic of China. It is widely held in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. That said, it’s unclear to me how much these values would actually differ between an AGI developed in a free country and one developed in an unfree one, since many of the AI systems the US government uses could also be put to oppressive purposes (and arguably already are used in oppressive ways by the US).
Holden Karnofsky calls this the “competition frame”, in which what matters most is who develops AGI. He contrasts it with the “caution frame”, which focuses more on whether AGI is developed in a rushed way than on whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution frame.
Hope this helps!
[1] Fwiw, I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual’s right to life, liberty, and the pursuit of happiness.
Thanks for the explanation! Though I think I’ve been misunderstood.
I strongly prefer that e.g. Sam Altman, Demis Hassabis, or Elon Musk end up with majority influence over how AGI gets applied (assuming we’re alive), rather than leading candidates in China (especially Xi Jinping). But stating that preference as “I hope China doesn’t develop AI before the US!” seems … unusually imprecise and harmful, especially when nationalistic framings like that are already very likely to fuel otherisation and a lack of cross-cultural understanding.
It’s like saying “Russia is an evil country for attacking Ukraine” when you could just say “Putin”, “the Russian government”, or any other phrasing that captures what you mean with less risk of spilling over into hatred of Russians in general.