Thanks; appreciate the feedback, and for sharing these links.
I agree that AI alignment with actual humans & groups needs to take law much more seriously as a legacy system for trying to manage ‘misalignments’ amongst actual humans and groups. New legal concepts may need to be invented—but AI alignment shouldn’t find itself in the hubristic position of trying to reinvent law from scratch.
I think AI alignment can draw from existing law to a large degree. New legal concepts may be needed, but there is already a lot of legal reasoning, legal concepts, legal methods, etc. that are directly applicable now (discussed in more detail here https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4218031).
Also, I think we should keep the involvement of AI in law-making (broadly defined) as limited as we can. And we should train AI to recognize when there is sufficient legal uncertainty that a human is needed to determine the correct action to take.