Strongly agreed. Somehow taking over the world and preventing anybody else from building AI seems like a core part of the plan for Yudkowsky and others. (When I asked about this on LW, somebody said they expected the first aligned AGI to implement global surveillance to prevent unaligned AGIs.) That sounds absolutely terrible—see risks from stable totalitarianism.
If Yudkowsky is right and the only way to save the world is by global domination, then I think we’re already doomed. But there are lots of cruxes in his worldview: short timelines, fast takeoff, the difficulty of the alignment problem, the idea that AGI will be a single entity rather than many different systems in different domains. Most people in AI safety are not nearly as pessimistic. I’d much rather bet on the wide range of scenarios where his dire predictions are incorrect.
But this wouldn’t be global domination in any conventional sense. When humans implement such things, their methods are extremely harsh and inhibit freedoms at all levels of society. A human-run domination would need to enforce its measures with harsh prison time, executions, fear, and intimidation. But this is mostly because humans are not very smart, so they don’t know any other way to stop person Y from doing X. A powerful AGI wouldn’t have this problem. I don’t think it would even have to be as crude as “burn all GPUs”. It could probably monitor and enforce things so efficiently that trying to create another AGI would be like trying to fight gravity: for a human, you simply can’t achieve it, no matter how many times you try, almost as if a new rule had been interwoven into the fabric of reality. This could probably be made less severe with an implementation like “no AGI above intelligence threshold X” or “nothing posing more than X amount of risk to the population”. In this less severe form, humans would still be free to develop AIs that could solve aging, cancer, space travel, etc., but couldn’t develop anything too powerful or dangerous.