Hi Geoffrey
Thanks for the kind words.
I did have a bit of a think about the implications for finding feasible AI governance solutions, and here’s my personal take:
If it is true that ‘inhibitive’ governance measures (perhaps like those in effect at Google) cause ML engineers to move to more dangerous research zones, I believe it might be prudent to explore models of AI governance that ‘accelerate’ progress towards alignment, rather than merely slowing the progression towards misalignment.
My general argument would be as follows:
If we assume it will be infeasible to buy out or convince most of the ML engineers on the planet to intrinsically value alignment, then global actors with poor intentions (e.g. imperialist autocracies) will benefit from a system in which well-intentioned actors have created a comparatively frustrating & unproductive environment for ML engineers. That is, not only will they have a more efficient R&D pipeline due to lower restrictions, they may also be better able to hire & retain talent over the long term.
One possible implication of this assertion is that the best course of action is to initiate an AI-alignment Manhattan Project focused on working towards a state of ‘stabilisation’ in the geopolitical/technological realm. The intention is to change the structure of the AI ecosystem so that it favours ‘aligned’ AI by promoting progress in that area, rather than accidentally proliferating ‘misaligned’ AI by stifling progress in ‘pro-alignment’ zones.
I find this conclusion fairly disturbing and I hope there’s some research out there that can disprove it.
Hi Justin, thanks for this reply. Lots to think about. For the moment, just one point:
I worry that EA culture tends to trust the US/Western companies, governments, & culture too much, and is too quick to portray China as an ‘imperialist autocracy’ that can’t be trusted at all, and that’s incapable of taking a long view about humanity in general, or about X risks in particular. (Not that this is what you’re necessarily doing here; your comment just provoked this mini-rant about EA views of China in general).
I’m far from a China expert, but I have some experience teaching at a Chinese university, reading a fair amount about China, and following its rise rather closely over the last few decades. My sense is that the Chinese government and people are somewhat more likely to value AI alignment than American politicians, media, and voters do.
And that they have good reasons not to trust any American political or cultural strategy for trying to make AI research safe. They see the US as having been far more aggressively imperialistic over the last couple of hundred years than China has ever been. They understand that the US fancies itself a representative democracy, but that, in practice, it is, like all stable countries, an oligarchy pretending to be something other than an oligarchy. They see their own system as at least honest about the nature of its political power, whereas Americans seem deluded in thinking that their votes can actually change the political power structure.
I worry that the US/UK strategies for trying to make AI research safer will simply not be credible to Chinese leaders, AI researchers, or ordinary people, and will be seen as just another form of American exceptionalism, in which we act as if we’re the only people in the world who can be trusted to reduce global X risks. From what I’ve seen so far (e.g. a virtual absence of any serious political debate about AI in the US), China would be right not to trust our capacity to take this problem seriously, let alone to solve it.