Super excited about the artificial conscience paper. I’d note that a similar approach could be very useful for creating law-following AIs:
An LFAI system does not need to store all knowledge regarding the set of laws that it is trained to follow. More likely, the practical way to create such a system would be to make the system capable of recognizing when it faces sufficient legal uncertainty,[10] then seeking evaluation from a legal expert system (“Counselor”).[11]
The Counselor could be a human lawyer, but in the long run it is probably most robust and efficient if (at least partially) automated. The Counselor would then render advice purely on the basis of idealized legality: the probability and expected legal downsides that would result from an idealized legal dispute regarding the action, if everyone knew all the relevant facts.
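To make the escalation loop concrete, here is a minimal sketch of how an LFAI system might defer to a Counselor. Everything in it (`legal_uncertainty`, `Counselor`, the two thresholds, the dispute model) is my own illustrative assumption, not anything specified in the paper: the system acts on its own when its estimated legal uncertainty is low, and otherwise asks the Counselor for the probability and magnitude of downside from an idealized dispute.

```python
# Illustrative sketch of the LFAI -> Counselor escalation loop described above.
# All names and thresholds are assumptions for exposition, not from the paper.
from dataclasses import dataclass


@dataclass
class LegalAdvice:
    """Idealized-legality estimate returned by the Counselor."""
    p_adverse_ruling: float     # probability the action loses an idealized legal dispute
    downside_if_adverse: float  # magnitude of the legal downside if that dispute is lost


class Counselor:
    """Legal expert system (a human lawyer or an automated one) that evaluates actions."""

    def evaluate(self, action: str) -> LegalAdvice:
        # Placeholder: a real Counselor would model an idealized dispute in which
        # everyone knows all the relevant facts, then estimate the two quantities.
        raise NotImplementedError


UNCERTAINTY_THRESHOLD = 0.2  # illustrative: how much legal uncertainty the LFAI tolerates alone
DOWNSIDE_THRESHOLD = 1.0     # illustrative: maximum acceptable expected legal downside


def legal_uncertainty(action: str) -> float:
    """The LFAI's own estimate of how legally uncertain an action is (0 = clearly lawful)."""
    raise NotImplementedError


def act_lawfully(action: str, counselor: Counselor) -> bool:
    """Return True if the LFAI should proceed with `action`, False if it should refrain."""
    if legal_uncertainty(action) < UNCERTAINTY_THRESHOLD:
        return True  # confident enough to proceed without seeking advice
    advice = counselor.evaluate(action)  # escalate to the Counselor
    # Proceed only if the expected downside from the idealized dispute is acceptable.
    return advice.p_adverse_ruling * advice.downside_if_adverse < DOWNSIDE_THRESHOLD
```

The point of the sketch is just the division of labor: the LFAI only needs to recognize when it is uncertain, while the Counselor carries the legal knowledge needed to price the idealized dispute.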