You might say that we could train an AI system to learn what is and isn't breaking the law; but then you might as well train an AI system to learn what is and isn't the thing you want it to do. It's not clear why training to follow laws would be easier than training it to do what you want; the latter would be a much more useful AI system.
Some reasons why this might be true:
Law is less indeterminate than you might think, and probably more definite than human values
Law has authoritative corpora readily available
Law has built-in, authoritative adjudication/dispute resolution mechanisms. Cf. AI Safety by Debate.
In general, my guess is that there is a large space of actions that:
1. Are unaligned, and
2. Are illegal, and
3. Due to the formality of parts of law and the legal process, an AI can be made to have higher confidence that an action is (2) than (1).
However, it's very possible that, as you suggest, solving AI legal compliance requires solving AI Safety generally. This seems somewhat unlikely to me but I have low confidence in this since I'm not an expert. :-)
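To make the comparison in points (1)-(3) concrete, here is a minimal toy sketch (my own illustration, not something proposed in the thread): two hypothetical estimators score an action for illegality and for misalignment, each with a confidence in its own estimate, and a filter vetoes actions only on the judgment it can trust. The names (estimate_illegality, estimate_misalignment, permitted) and the numbers in the stubs are invented for illustration; the only point is that a well-calibrated illegality check could already block much of the unaligned-and-illegal region even while the misalignment check remains unreliable.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    probability: float  # estimated probability that the property holds for this action
    confidence: float   # how much we trust that estimate (0 = none, 1 = full)

def estimate_illegality(action: str) -> Judgment:
    # Hypothetical stub. The argument above is that statutes, caselaw, and formal
    # adjudication records make this judgment comparatively easy to calibrate.
    return Judgment(probability=0.9, confidence=0.8)

def estimate_misalignment(action: str) -> Judgment:
    # Hypothetical stub. "What the principal actually wants" is argued to be less
    # determinate, so confidence here may be lower for the same action.
    return Judgment(probability=0.9, confidence=0.3)

def permitted(action: str, prob_threshold: float = 0.5, conf_threshold: float = 0.7) -> bool:
    """Veto an action when we are confident enough that it is probably illegal.

    Even if the misalignment estimate is too unreliable to act on, a confident
    illegality estimate already rules out actions in the unaligned-and-illegal
    overlap described in points (1)-(3).
    """
    illegal = estimate_illegality(action)
    if illegal.confidence >= conf_threshold and illegal.probability >= prob_threshold:
        return False  # confidently judged (probably) illegal, so blocked
    return True

if __name__ == "__main__":
    action = "transfer user funds to the operator's personal account"
    print(estimate_illegality(action))    # Judgment(probability=0.9, confidence=0.8)
    print(estimate_misalignment(action))  # Judgment(probability=0.9, confidence=0.3)
    print(permitted(action))              # False: blocked on the higher-confidence legal judgment
```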
Law is less indeterminate than you might think, and probably more definite than human values
Agreed that "human values" is harder and more indeterminate, because it's a tricky philosophical problem that may not even have a solution.
I don't think "alignment" is harder or more indeterminate, where "alignment" means something like "I have in mind something I want the AI system to do, it does that thing, without trying to manipulate me / deceive me, etc."
Like, idk, imagine there was a law that said "All AI systems must not deceive their users, and must do what they believe their users want". A real law would probably only be slightly more explicit than that? If so, just creating an AI system that followed only this law would lead to something that meets the criterion I'm imagining. Creating an AI system that follows all laws seems a lot harder.
Due to the formality of parts of law and the legal process, an AI can be made to have higher confidence that an action is (2) than (1).
I think this would probably have been true of expert systems but not so true of deep learning-based systems.
Also, personally I find it easier to tell when my actions are unaligned with <person X whom I know> than when my actions are illegal.
I don't think "alignment" is harder or more indeterminate, where "alignment" means something like "I have in mind something I want the AI system to do, it does that thing, without trying to manipulate me / deceive me, etc."
Yeah, I agree with this.
imagine there was a law that said "All AI systems must not deceive their users, and must do what they believe their users want". A real law would probably only be slightly more explicit than that?
I'm not sure that's true. (Most) real laws have huge bodies of interpretative text surrounding them and examples of real-world applications of them to real-world facts.
Creating an AI system that follows all laws seems a lot harder.
Lawyers approximate generalists: they can take arbitrary written laws and give advice on how to conform behavior to those laws. So a lawyerlike AI might be able to learn general interpretative principles and research skills and be able to simulate legal adjudications of proposed actions.
I think this would probably have been true of expert systems but not so true of deep learning-based systems.
Interesting; I don't have good intuitions on this!
(Most) real laws have huge bodies of interpretative text surrounding them and examples of real-world applications of them to real-world facts.
Right, I was trying to factor this part out, because it seemed to me that the hope was "the law is explicit and therefore can be programmed in". But if you want to include all of the interpretative text and examples of real-world application, it starts looking more like "here is a crap ton of data about this law, please understand what this law means and then act in accordance with it", as opposed to directly hardcoding in the law.
Under this interpretation (which may not be what you meant), this becomes a claim that laws have a lot more data that pinpoints what exactly they mean, relative to something like "what humans want", and so an AI system will more easily pinpoint it. I'm somewhat sympathetic to this claim, though I think there is a lot of data about "what humans want" in everyday life that the AI can learn from. But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI, it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).
Lawyers approximate generalists: they can take arbitrary written laws and give advice on how to conform behavior to those laws. So a lawyerlike AI might be able to learn general interpretative principles and research skills and be able to simulate legal adjudications of proposed actions.
I'm not sure what you're trying to imply with this: does it make the AI's task easier? Harder? Does the generality somehow imply that the AI is safer?
Like, I don't get why this point has any bearing on whether it is better to train "lawyerlike AI" or "AI that tries to do what humans want". If anything, I think it pushes in the "do what humans want" direction, since historically it has been very difficult to create generalist AIs, and easier to create specialist AIs.
(Though I'm not sure I think "AI that tries to do what humans want" is less "general" than lawyerlike AI.)
But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI, it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).
My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (including both the way it's written and the formal significance of interpretative texts), we might get good law-following before good value alignment.
I'm not sure what you're trying to imply with this: does it make the AI's task easier? Harder? Does the generality somehow imply that the AI is safer?
Sorry. I was responding to the "all laws" point. My point was that I think that making a law-following AI that can follow (A) all enumerated laws is not much harder than one that can be made to follow (B) any given law. That is, difficulty of construction scales sub-linearly with the number of laws it needs to follow. The interpretative tools needed to get to (B) should be pretty generalizable to (A).
My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems.
I agree for fully formal systems (e.g. solving SAT problems), but don't agree for "more formal" systems like law.
Mostly I'm thinking that understanding law would require you to understand language, but once you've understood language you also understand "what humans want". You could imagine a world in which AI systems understand the literal meaning of language but don't grasp the figurative / pedagogic / Gricean aspects of language, and in that world I think AI systems will understand law earlier than normal English, but that doesn't seem to be the world we live in:
GPT-2 and other language models don't seem particularly literal.
We have way more training data about natural language as it is normally used (most of the Internet), relative to natural language meant to be interpreted mostly literally.
Humans find it easier / more "native" to interpret language in the figurative / pedagogic way than to interpret it in the literal way.
My point was that I think that making a law-following AI that can follow (A) all enumerated laws is not much harder than one that can be made to follow (B) any given law.
The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.
Thanks Rohin!
Makes sense, that seems true to me.
Yeah, I certainly feel better about learning law relative to learning the One True Set of Human Values That Shall Then Be Optimized Forevermore.