Given the draconian control that this state imposes on everyone, it seems likely that at least 15% of the population would strongly resent and resist this government. Executing dissenters could easily be a catastrophic risk in itself. Beyond directly killing or imprisoning anyone who might try to disobey the state, the government would likely find that scapegoating certain groups is a good way to justify its power, shift blame for stagnant or deteriorating economic conditions, and maintain stability.
My assumption—although I haven’t read the paper recently—was that this would only be implemented if the state had access to immensely powerful AI technologies, in which case there would be no sense in executing dissenters, because they wouldn’t pose any threat at all. Similarly, at that point the state wouldn’t have any need to scapegoat any particular group, as its power would be absolute and essentially unchallengeable, and economic conditions would be essentially irrelevant given the resulting abundance.
I feel slightly bad saying this, but I downvoted your post because I felt it was likely to cause people to misunderstand Bostrom’s position, and I don’t think we want to encourage criticism that confuses people about the position someone holds. At the same time, I appreciated all the hard work you put into this, so it felt like a shame that I couldn’t upvote it. (I also feel slightly nervous, since I haven’t read the paper recently; maybe I’m actually the one misunderstanding him, which would be rather embarrassing.)
Bostrom may have made this point elsewhere, since I’ve heard other people say it, but he doesn’t make it in the paper. He mentions AI only briefly, as a tool the panopticon government could use to analyze the video and audio coming in from its surveillance. He also says:
“Being even further removed from individuals and culturally cohesive ‘peoples’ than are typical state governments, such an institution might by some be perceived as less legitimate, and it may be more susceptible to agency problems such as bureaucratic sclerosis or political drift away from the public interest.”
He also considers what might be required for a global state to bring other world governments to heel. So I don’t think he is assuming that the state can completely ignore all dissent or resistance because it FOOMs into an all-powerful AI.
Either way, I think that is a really bad argument. It’s basically just saying “if we had aligned superintelligence running the world, everything would be fine,” which is almost tautologically true. But what are we supposed to conclude from that? I don’t think it tells us anything about increasing state power on the margin. Also, aligning the interests of powerful AI with a powerful global state is not sufficient for alignment of AI with humanity more generally; powerful global states are not very well aligned with the interests of their constituents.
My reading is that Bostrom is making arguments about how human governance would need to change to address risks from some types of technology. The arguments aren’t explicitly contingent on any AI technology that isn’t available today.