But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI, it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).
My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (including both the way it's written and the formal significance of interpretative texts), we might get good law-following before good value alignment.
I'm not sure what you're trying to imply with this. Does it make the AI's task easier? Harder? Does the generality somehow imply that the AI is safer?
Sorry. I was responding to the "all laws" point. My point was that I think that making a law-following AI that can follow (A) all enumerated laws is not much harder than making one that can follow (B) any given law. That is, difficulty of construction scales sub-linearly with the number of laws the AI needs to follow. The interpretative tools that get you to (B) should be pretty generalizable to (A).
My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems.
I agree for fully formal systems (e.g. solving SAT problems), but don't agree for "more formal" systems like law.
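To make the contrast concrete: in a fully formal system like SAT, both the problem and the success criterion are exactly machine-checkable, with no interpretative work left over. A minimal illustrative sketch (the formula and its encoding here are made up for the example):

```python
# Brute-force satisfiability check for a tiny propositional formula,
# (x1 OR NOT x2) AND (x2 OR x3), encoded as clauses of (variable, polarity)
# literals. The point: "success" is exactly defined, with none of the
# interpretative judgment that legal texts require.
from itertools import product

clauses = [[(1, True), (2, False)], [(2, True), (3, True)]]

def satisfies(assignment, clauses):
    """True iff every clause contains at least one literal made true."""
    return all(
        any(assignment[var] == polarity for var, polarity in clause)
        for clause in clauses
    )

for bits in product([False, True], repeat=3):
    assignment = {i + 1: b for i, b in enumerate(bits)}
    if satisfies(assignment, clauses):
        print("Satisfying assignment:", assignment)
        break
```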
Mostly I'm thinking that understanding law would require you to understand language, but once you've understood language you also understand "what humans want". You could imagine a world in which AI systems understand the literal meaning of language but don't grasp the figurative / pedagogic / Gricean aspects of language, and in that world I think AI systems will understand law earlier than normal English, but that doesn't seem to be the world we live in:
GPT-2 and other language models don't seem particularly literal (a quick probe of this is sketched after this list).
We have way more training data about natural language as it is normally used (most of the Internet), relative to natural language meant to be interpreted mostly literally.
Humans find it easier / more "native" to interpret language in the figurative / pedagogic way than to interpret it in the literal way.
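On the first bullet, this is the kind of thing one can probe directly. A minimal sketch, assuming the HuggingFace transformers library and the public "gpt2" checkpoint (neither is named in the discussion above): give the model an idiom and see whether its sampled completions read it figuratively or literally.

```python
# Rough probe of whether a language model reads an idiom literally or
# figuratively. Assumes the HuggingFace `transformers` library and the
# public "gpt2" checkpoint; the prompt is an arbitrary example.
from transformers import pipeline, set_seed

set_seed(0)  # make the sampled completions reproducible
generator = pipeline("text-generation", model="gpt2")

# A literal reading would continue with actual animals; a model that has
# absorbed ordinary usage should continue with weather talk.
prompt = "It was raining cats and dogs, so we"
outputs = generator(prompt, max_length=40, num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])
    print("---")
```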
My point was that I think that making a law-following AI that can follow (A) all enumerated laws is not much harder than one that can be made to follow (B) any given law.
Makes sense, that seems true to me.

The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.
Yeah, I certainly feel better about learning law relative to learning the One True Set of Human Values That Shall Then Be Optimized Forevermore.