(Most) real laws have huge bodies of interpretative text surrounding them and examples of real-world applications of them to real-world facts.
Right, I was trying to factor this part out, because it seemed to me that the hope was “the law is explicit and therefore can be programmed in”. But if you want to include all of the interpretative text and examples of real-world application, it starts looking more like “here is a crap ton of data about this law, please understand what this law means and then act in accordance with it”, as opposed to directly hardcoding in the law.
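To make that contrast concrete, here’s a toy sketch (the speed-limit rule and the model interface are made up purely for illustration):

```python
# Toy contrast, purely illustrative; the rule and the model interface are made up.

# (1) "Directly hardcoding in the law": the rule itself is spelled out in code.
def hardcoded_speed_check(speed_mph: float, limit_mph: float = 65.0) -> bool:
    return speed_mph <= limit_mph

# (2) "Here is a crap ton of data about this law, please work out what it means":
# the rule is never written out explicitly; it has to be inferred from statutes,
# interpretative text, and adjudicated examples.
def learned_compliance_check(model, proposed_action: str) -> bool:
    # `model` is a stand-in for something trained on that corpus; all the
    # interpretative work lives inside the model rather than in explicit code.
    return model.predict(proposed_action)  # hypothetical interface
```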
Under this interpretation (which may not be what you meant), this becomes a claim that laws come with a lot more data pinpointing what exactly they mean, relative to something like “what humans want”, and so an AI system will more easily pinpoint it. I’m somewhat sympathetic to this claim, though I think there is a lot of data about “what humans want” in everyday life that the AI can learn from. But my real reason for not caring too much about this is that in this story we rely on the AI’s “intelligence” to “understand” laws, as opposed to “programming it in”; given that we’re worried about superintelligent AI, it should be “intelligent” enough to “understand” what humans want as well (given that humans seem to be able to do that).
Lawyers approximate generalists: they can take arbitrary written laws and give advice on how to conform behavior to those laws. So a lawyerlike AI might be able to learn general interpretative principles and research skills and be able to simulate legal adjudications of proposed actions.
I’m not sure what you’re trying to imply with this: does this make the AI’s task easier? Harder? Does the generality somehow imply that the AI is safer?
Like, I don’t get why this point has any bearing on whether it is better to train “lawyerlike AI” or “AI that tries to do what humans want”. If anything, I think it pushes in the “do what humans want” direction, since historically it has been very difficult to create generalist AIs, and easier to create specialist AIs.
(Though I’m not sure I think “AI that tries to do what humans want” is less “general” than lawyerlike AI.)
But my real reason for not caring too much about this is that in this story we rely on the AI’s “intelligence” to “understand” laws, as opposed to “programming it in”; given that we’re worried about superintelligent AI, it should be “intelligent” enough to “understand” what humans want as well (given that humans seem to be able to do that).
My intuition is that more formal systems will be easier for AI to understand earlier in the “evolution” of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (including both the way it’s written and the formal significance of interpretative texts), we might get good law-following before good value alignment.
I’m not sure what you’re trying to imply with this: does this make the AI’s task easier? Harder? Does the generality somehow imply that the AI is safer?
Sorry. I was responding to the “all laws” point. My point was that I think that making a law-following AI that can follow (A) all enumerated laws is not much harder than one that can be made to follow (B) any given law. That is, difficulty of construction scales sub-linearly with the number of laws it needs to follow. The interpretative tools needed to get to (B) should be pretty generalizable to (A).
My intuition is that more formal systems will be easier for AI to understand earlier in the “evolution” of SOTA AI intelligence than less-formal systems.
I agree for fully formal systems (e.g. solving SAT problems), but don’t agree for “more formal” systems like law.
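To spell out what I mean by “fully formal”: a SAT instance is a complete specification, and checking a candidate answer against it is purely mechanical. A toy brute-force sketch (illustrative only):

```python
# A "fully formal" problem: the spec is complete and checking is mechanical.
from itertools import product

def brute_force_sat(clauses, n_vars):
    """clauses: list of clauses; each clause is a list of ints,
    where i means "x_i is true" and -i means "x_i is false"."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # unsatisfiable

# (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force_sat([[1, -2], [2, 3]], n_vars=3))
```

There is no body of interpretative text needed to decide whether an assignment satisfies the formula; law isn’t like that.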
Mostly I’m thinking that understanding law would require you to understand language, but once you’ve understood language, you also understand “what humans want”. You could imagine a world in which AI systems understand the literal meaning of language but don’t grasp its figurative / pedagogic / Gricean aspects; in that world I think AI systems would understand law earlier than they understand normal English. But that doesn’t seem to be the world we live in:
GPT-2 and other language models don’t seem particularly literal (see the rough probe sketched after this list).
We have way more training data about natural language as it is normally used (most of the Internet) than about natural language meant to be interpreted mostly literally.
Humans find it easier / more “native” to interpret language in the figurative / pedagogic way than to interpret it in the literal way.
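Here’s that rough probe, using the Hugging Face transformers library (sampled output varies run to run; a sanity check, not evidence):

```python
# Rough, informal probe of how literally GPT-2 treats an idiom.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "It was raining cats and dogs, so I"
for out in generator(prompt, max_length=30, num_return_sequences=3, do_sample=True):
    print(out["generated_text"])

# If the continuations are about getting wet or staying indoors rather than about
# literal animals falling from the sky, the model is treating the idiom
# figuratively rather than literally.
```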
My point was that I think that making a law-following AI that can follow (A) all enumerated laws is not much harder than one that can be made to follow (B) any given law.
The key difference in my mind is that, with law, the AI system does not need to determine the relative authoritativeness of different pronouncements the way it would for human values, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.
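As a cartoon of what “pretty formalized” means here (a toy model with made-up court names and ranks, not actual doctrine):

```python
# Toy model of formalized authoritativeness: given conflicting holdings, pick the
# controlling one by court rank, then by recency. Real doctrine (jurisdiction,
# overruling, holding vs. dicta, ...) is far richer; this is only a cartoon.
from dataclasses import dataclass

COURT_RANK = {"Supreme Court": 3, "Court of Appeals": 2, "District Court": 1}

@dataclass
class Holding:
    court: str
    year: int
    rule: str

def controlling_holding(holdings):
    return max(holdings, key=lambda h: (COURT_RANK[h.court], h.year))

holdings = [
    Holding("District Court", 2021, "Action X is permitted"),
    Holding("Court of Appeals", 2015, "Action X is prohibited"),
]
print(controlling_holding(holdings).rule)  # -> "Action X is prohibited"
```

There’s no comparably crisp ranking for conflicting pronouncements about human values.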
Makes sense, that seems true to me.
Yeah, I certainly feel better about learning law relative to learning the One True Set of Human Values That Shall Then Be Optimized Forevermore.