My main objection is that people working in government need to be able to get away with a mild level of lying and scheming to do their jobs (eg brokering compromises, meeting with constituents). AI could upset this equilibrium in a couple of ways, making it harder to govern.
If the AI is just naive, it might do things like call out a politician for telling a harmless white lie, jeopardizing, eg, an international agreement that was about to be signed.
One response is that human overseers will discipline these naive mistakes, but the more human oversight is required, the more you run into the typical problems of human oversight you outlined above. “These evaluators can do so while not seeing critical private information” is not always true. (Eg if the AI realizes that Biden is telling Xi a minor lie that rests on classified information, revealing the existence of the lie to the overseer would necessarily reveal classified information.)
Even if the AI is not naive, and can distinguish white lies from outright misinformation, say, I still worry that it undermines the current equilibrium. The public would call for stricter and stricter oversight standards, while government workers will struggle to fight back because:
1. That’s a bad look, and
2. The benefits of a small level of deception are hard to identify and articulate.
TLDR: Government needs some humans in the loop making decisions and working together. To work together, humans need some latitude to behave in ways that would become difficult with greater AI integration.
Thanks for vocalizing your thoughts!
1. I agree that these systems could have severe flaws, especially if over-trusted, in ways similar to my concerns about management. Finding ways to make sure they are reliable will be hard, though many organizations might want reliability enough to do much of that work anyway. I obviously wouldn’t suggest or encourage bad implementations.
2. I feel comfortable that we can choose where to apply this technology. I assume these systems should, can, and would be rolled out gradually.
At the same time, I would hope that we could move towards an equilibrium with much less lying. A lot of lies in business are both highly normalized and definitely not white lies.
All in all, I’m proposing powerful technology. This is a tool, and it’s possible to use almost any tool in incompetent or malicious ways, if you really want to.
Welsh government commits to making lying in politics illegal
This sounds awesome at first blush; I would love to see it battle-tested.
I am very pessimistic about this—my assumption is the state will use it to attack its enemies, even when they make true statements, while ignoring the falsehoods of its allies.
I think the example in the article is pretty strong evidence of this. They claim a politician is guilty of a “direct lie” for saying that a policy would entitle illegal immigrants to £1,600/month of welfare. The claim is somewhat misleading, because illegal immigrants would not be the only ones entitled. But it is true that some of those entitled would be illegal immigrants, and their inclusion was a deliberate policy choice.
If this is the canonical example they’re using to illustrate the rule, rather than something more objective and clear-cut, this makes me very pessimistic that it will actually be applied in a truth-seeking manner. Rather, this seems like it will undermine rational decision-making and democracy: the state can simply declare opposition politicians to be liars and remove them from the ballot, preventing voters from course-correcting.