My intuition is that more formal systems will be easier for AI to understand earlier in the “evolution” of SOTA AI intelligence than less-formal systems.
I agree for fully formal systems (e.g. solving SAT problems), but don’t agree for “more formal” systems like law.
Mostly I’m thinking that understanding law requires understanding language, and once you’ve understood language you also understand “what humans want”. You could imagine a world in which AI systems grasp the literal meaning of language but not its figurative / pedagogic / Gricean aspects; in that world, I think AI systems would understand law earlier than ordinary English. But that doesn’t seem to be the world we live in:
GPT-2 and other language models don’t seem particularly literal.
We have way more training data about natural language as it is normally used (most of the Internet), relative to natural language meant to be interpreted mostly literally.
Humans find it easier / more “native” to interpret language in the figurative / pedagogic way than to interpret it in the literal way.
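To make the “fully formal” end of the spectrum concrete: a SAT instance has a complete mechanical semantics, so checking whether a candidate interpretation is correct requires no pragmatic judgment at all. A minimal brute-force sketch (the clause encoding follows the common DIMACS convention; the function name is just for illustration):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses, each a list of ints; positive i means
    variable i, negative i means its negation (1-indexed, DIMACS-style).
    Returns a satisfying assignment as a dict, or None if unsatisfiable.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if any literal in it is true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable (e.g. x2 = True);
# (x1) AND (NOT x1) is not.
print(brute_force_sat([[1, 2], [-1, 2]], 2))
print(brute_force_sat([[1], [-1]], 1))
```

Nothing about interpreting this system depends on figurative or Gricean language understanding, which is the contrast being drawn with law.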
My point was that making a law-following AI that follows (A) all enumerated laws is not much harder than making one that can be made to follow (B) any given law.
The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.
Makes sense; that seems true to me.
Yeah, I certainly feel better about learning law relative to learning the One True Set of Human Values That Shall Then Be Optimized Forevermore.