Max—thanks for the reply. I’m familiar with the CEV concept. But I don’t see how it helps solve any of the hard problems in alignment that involve any conflicts of interest between human individuals, groups, corporations, or nation-states. It just sweeps all of those conflicts of interest under the rug.
In reality, corporations and nation-states won’t be building AIs to embody the collective CEV of humanity. They will build AIs to embody the profit-making or geopolitical interests of the builders.
We can preach at them that their AIs should embody humanity’s collective CEV. But they would get no comparative advantage from doing so. It wouldn’t help promote their group profit or power. It would be a purely altruistic act. So, given the current state of for-profit corporate governance, and for-power nation-state governance, that seems very unlikely.
Yep. I think in my ideal world, there would be exactly one operationally adequate organization permitted to build AGI. Membership in that organization would require a credible pledge to altruism and a test of oath-keeping ability.
This organization's monopoly on building AGI would be enforced by a global majority of nation-states, with monitoring and deterrence against defection.
I think a stable equilibrium of that kind is possible in principle, though obviously we’re pretty far away from it being anywhere near the Overton Window. (For good reason—it’s a scary idea, and probably ends up looking pretty dystopian when implemented by existing Earth governments. Alas! Sometimes draconian measures really are necessary; reality is not always nice.)
In the absence of such a radically different global political order, we might have to take our chances on the hope that the decision-makers at OpenAI, DeepMind, Anthropic, etc. will all be reasonably nice and altruistic, rather than power- or profit-seeking. Not great!
There might be worlds in between the most radical one sketched above and our current trajectory, but I worry that any “half measures” end up being ineffective and costly and worse than nothing, mirroring many countries’ approach to COVID lockdowns.