So, given the current state of for-profit corporate governance, and for-power nation-state governance, that seems very unlikely.
Yep. I think in my ideal world, there would be exactly one operationally adequate organization permitted to build AGI. Membership in that organization would require a credible pledge to altruism and a test of oath-keeping ability.
Monopoly power of this organization to build AGI would be enforced by a global majority of nation states, with monitoring and deterrence against defection.
I think a stable equilibrium of that kind is possible in principle, though obviously we’re pretty far away from it being anywhere near the Overton Window. (For good reason—it’s a scary idea, and probably ends up looking pretty dystopian when implemented by existing Earth governments. Alas! Sometimes draconian measures really are necessary; reality is not always nice.)
In the absence of such a radically different global political order, we might have to take our chances on the hope that the decision-makers at OpenAI, DeepMind, Anthropic, etc. will all be reasonably nice and altruistic, and not power- or profit-seeking. Not great!
There might be worlds in between the most radical one sketched above and our current trajectory, but I worry that any “half measures” end up being ineffective and costly and worse than nothing, mirroring many countries’ approach to COVID lockdowns.