Geoffrey Miller on Cross-Cultural Understanding Between China and Western Countries as a Neglected Consideration in AI Alignment

Geoffrey Miller is a professor of evolutionary psychology and a long-time participant in the effective altruism (EA) movement. He has recently been emphasizing that the extreme variability in human social psychology is a neglected consideration for AI alignment, relative to the field's more technical side. He has also been raising the alarm that this presents a thorny set of cultural, moral, and political issues that the EA and AI alignment communities are woefully unprepared to contend with.
In other words, for the problem of AI alignment:
There is the more philosophical problem of how humans might retain control of general/transformative AI, and what that would even look like.
There is the more STEM-oriented side tangling with the technologies and mathematics to accomplish that goal.
There is a third, underrated element of AI alignment, concerning human psychology: even assuming AGI could be aligned with some set of fundamentally human values, what are those (sets of) values, and who controls which set(s) of values transformative/general AI would be aligned with?
His commentary on this subject often touches on why relations between China and Western countries, especially the United States, matter for EA and AI alignment. He most recently reinforced all of this in an in-depth and well-received comment on a post by Leopold evaluating the generally abysmal state of the entire field of AI alignment. From Geoffrey's comment, on how that all relates to China specifically:
We have, in my opinion, some pretty compelling reasons to think that it [the problem of AI alignment] is not solvable even in principle [...] given the deep game-theoretic conflicts between human individuals, groups, companies, and nation-states [emphasis added] (which cannot be waved away by invoking Coherent Extrapolated Volition, or ‘dontkilleveryoneism’, or any other notion that sweeps people’s profoundly divergent interests under the carpet). [...] In other words, the assumption that ‘alignment is solvable’ might be a very dangerous X-risk amplifier, in its own right [...] It may be leading China to assume that some clever Americans are already handling all those thorny X-risk issues, such that China doesn’t really need to duplicate those ongoing AI safety efforts, and will be able to just copy our alignment solutions once we get them.
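The "deep game-theoretic conflicts" Geoffrey gestures at can be illustrated with a toy model (the structure and payoff numbers here are my own illustrative sketch, not anything from his comment): if two actors each choose whether to race ahead on AI or slow down, and racing confers an edge over a cautious rival, racing can be individually dominant even though mutual racing is the worst joint outcome — the familiar structure of a prisoner's dilemma.

```python
# Toy two-actor "AI race" game, sketched as a prisoner's dilemma.
# Payoff numbers are purely illustrative (higher = better for that actor).
# Each actor chooses "race" or "slow"; tuples are (actor 1 payoff, actor 2 payoff).
PAYOFFS = {
    ("slow", "slow"): (3, 3),  # coordinated caution: best joint outcome
    ("slow", "race"): (0, 4),  # the unilateral racer gains an edge
    ("race", "slow"): (4, 0),
    ("race", "race"): (1, 1),  # mutual racing: worse for both than mutual caution
}

def best_response(opponent_move):
    """Return actor 1's payoff-maximizing move against a fixed opponent move."""
    return max(["slow", "race"], key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Racing is a dominant strategy: it is the best response to either opponent move,
# even though (race, race) leaves both actors worse off than (slow, slow).
assert best_response("slow") == "race"
assert best_response("race") == "race"
```

This is only a sketch of why divergent interests resist being "waved away": no appeal to shared values changes the incentive structure unless it actually alters the payoffs the actors face.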
This isn’t the first comment of Geoffrey’s along these lines that I’ve found interesting, so I searched for other views he has expressed on the matter.
I was surprised to find three pages’ worth of search results on the EA Forum for his thoughtful comments about the underrated relevance, to EA and AI alignment, of cultural and political divides between China and Western countries, including over a dozen such comments in the last year alone. Here is a cross-section of the viewpoints from that commentary I’ve found most insightful.
On the cruciality for AI alignment and EA of gaining a better understanding of the culture and politics of China and other non-western countries:

Politics tends to be very nation-specific and culture-specific, whereas EA aspires to global relevance. Insofar as EAs tend to be from the US, UK, Germany, Australia, and few other ‘Western liberal democracies’, we might end up focusing too much on the kinds of political institutions and issues typical of these countries. This would lead to neglect of other countries with other political values and issues. But even worse, it might lead us to neglect geopolitically important nation-states such as China and Russia where our ‘Western liberal democracy’ models of politics just don’t apply very well. This could lead us to neglect certain ideas and interventions that could help nudge those countries in directions that will be good for humanity long-term (e.g. minimizing global catastrophic risks from Russian nukes or Chinese AI).

On the risk of excessive pro-America/pro-western bias, and anti-China bias, in effective altruism and AI alignment (this is a long comment, and I’m not excerpting any one part of it, as it’s comprehensive and worth reading in its entirety if you can spare the time for it).

On the AI arms race in terms of political tensions between China and the United States:

I also encounter this claim [that China could or will easily exploit any slowdown of AI capabilities research in the US] very often on social media. ‘If the US doesn’t rush ahead towards AGI, China will, & then we lose’. It’s become one of the most common objections to slowing down AI research by US companies, and is repeated ad nauseum by anti-AI-safety accelerationists. [...] It’s not at all obvious that China would rush ahead with AI if the US slowed down. [...] If China was more expansionist, imperialistic, and aggressive, I’d be more concerned that they would push ahead with AI development for military applications. Yes, they want to retake Taiwan, and they will, sooner or later. But they’re not showing the kind of generalized western-Pacific expansionist ambitions that Japan showed in the 1930s. As long as the US doesn’t meddle too much in the ‘internal affairs of China’ (which they see as including Taiwan), there’s little need for a military arms race involving AI.

I worry that Americans tend to think and act as if we are the only people in the world who are capable of long-term thinking, X risk reduction, or appreciation of humanity’s shared fate.

On the relevance to AI alignment of differences in academic freedom between China and the US:

I’m not a China expert, but I have some experience running classes and discussion forums in a Chinese university. In my experience, people in China feel considerably more freedom to express their views on a wide variety of issues than Westerners typically think they do. There is a short list of censored topics, centered around criticism of the CCP itself, Xi Jinping, Uyghurs, Tibet, and Taiwan. But I would bet that they have plenty of freedom to discuss AI X risks, alignment, and geopolitical issues around AI, as exemplified by the fact that Kai-Fu Lee, author of ‘AI Superpowers’ (2018), and based in Beijing, is a huge tech celebrity in China who speaks frequently on college campuses there—despite being a vocal critic of some [government] tech policies.

Conversely, there are plenty of topics in the West, especially in American academia, that are de facto censored (through cancel culture). For example, it was much less trouble to teach about evolutionary psychology, behavior genetics, intelligence research, and even sex research in a Chinese university than in an American university.