Thanks; I appreciate the feedback and your sharing these links.
I agree that AI alignment with actual humans and groups needs to take law much more seriously, since law is our legacy system for managing ‘misalignments’ among actual humans and groups. New legal concepts may need to be invented, but AI alignment shouldn't find itself in the hubristic position of trying to reinvent law from scratch.
I think AI alignment can draw on existing law to a large degree. New legal concepts may be needed, but there is a lot of legal reasoning, and many legal concepts and methods, that are directly applicable now (discussed in more detail here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4218031).
Also, I think we should keep the involvement of AI in law-making (broadly defined) as limited as we can. And we should train AI to recognize when there is sufficient legal uncertainty that a human is needed to determine the correct action.
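To make that deferral idea concrete, here is a minimal sketch (my illustration, not a method from the linked papers). It assumes a toy uncertainty estimator based on disagreement among sampled legal-compliance judgments, plus a hypothetical human-set threshold that triggers escalation.

```python
# Minimal sketch of "defer to a human under sufficient legal uncertainty".
# The estimator and threshold are illustrative assumptions, not a real API.
from statistics import pstdev

UNCERTAINTY_THRESHOLD = 0.15  # hypothetical policy knob, set by humans

def legal_uncertainty(compliance_scores: list[float]) -> float:
    # Toy proxy: spread among sampled legal-compliance judgments (each in
    # [0, 1]); high disagreement suggests the law's application is unclear.
    return pstdev(compliance_scores)

def decide(action: str, compliance_scores: list[float]) -> str:
    if legal_uncertainty(compliance_scores) >= UNCERTAINTY_THRESHOLD:
        return f"escalate to human: legality of {action!r} is uncertain"
    return f"proceed with {action!r}"

# Samples agree -> the AI may act on its own.
print(decide("send routine reminder email", [0.95, 0.93, 0.97]))
# Samples disagree -> a human resolves the correct action.
print(decide("scrape competitor pricing data", [0.9, 0.4, 0.2]))
```

The point is only the shape of the rule: the AI acts autonomously in legally clear cases and routes genuinely contested ones to a human, with the threshold chosen by people rather than by the system itself.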
This is a great post.
Law is the best solution I can think of to address the issues you raise.
Here https://forum.effectivealtruism.org/posts/9YLbtehKLT4ByLvos/agi-misalignment-x-risk-may-be-lower-due-to-an-overlooked I argue that law-informed AI is likely the best path forward for societal alignment.
Here https://forum.effectivealtruism.org/posts/4ykDJA57wstYWq9HK/intent-alignment-should-not-be-the-goal-for-agi-x-risk I explore the difference between intent alignment and societal alignment.