Thanks a ton for your substantive engagement, Luke! I’m sorry it took so long to respond; I value it highly.
Law doesn’t always reflect the morals we want it to. In the 1940s, an AI system under this type of governance would have turned in any Jews it found to the Nazis, because that was the lawful thing to do, despite it being, to any human, clearly the wrong thing to do. Further examples include turning in escaped slaves it encountered, fully collaborating with the Gestapo in hunting partisans, full collaboration with Russia in seizing Ukrainian territory, and more. These are all extreme examples sprinkled throughout the past, but the point is that we don’t know what the future holds. Currently, here in the UK, the government are putting considerable effort and resources into restricting basic protest rights through the Police, Crime, Sentencing and Courts Bill. It’s not impossible that in 30 years’ time the UK (or anywhere else) will be a police state or authoritarian regime. An AI rigidly sticking to the law would then be morally misaligned. You did touch on this when discussing a balance of outcomes, so it’s not so much a weakness as an area we need to explore much more.
Yeah, definitely agree that this is tricky and should be analyzed more (especially drawing on the substantial existing literature about moral permissibility of lawbreaking, which I haven’t had the time to fully engage in).
For common law systems such as the UK’s (and I believe the USA’s? Please correct me if wrong), an AI using case law for its moral alignment would find that alignment shifting faster than we could reasonably foresee. Just look at the impact R v Cartwright (1569) had on Shanley v Harvey (1763): because Cartwright did not specify the slave’s race, it could later be relied on to hold that a slave was a person and was freed the moment he stood on English soil. For civil law systems such as those in mainland Europe this would be a much easier idea, but common law systems would need a bit more finesse. Alternatively, the AI could exclude case law and rely on legislation alone, though that could be a bit barebones.
Yeah, I do think there’s an interesting thing here where LFAI would make apparent the existing need to adopt some jurisprudential stance about how to think about the evolution of law, and particularly of predicted changes in the law. As an example of how this already comes up in the US, judges sometimes regard higher courts’ precedents as bad law, notwithstanding the fact that the higher court has not yet overruled it. The addition of AI into the mix—as both a predictor of and possible participant in the legal system, as well as a general accelerator of the rate of societal change—certainly threatens to stretch our existing ways of thinking about this. This is also why I’m worried about asymmetrical use of advanced AI in legal proceedings. See footnote 6.
Different countries have different laws. Some companies in the USA don’t (or can’t) operate in the EU or the UK because the GDPR affords Europeans more data rights than US law affords its own citizens; a data-processing company that even allowed access to its website from European jurisdictions could face large fines for doing to EU residents’ data what it routinely does to US data without issue. I can see this being a problem for moral alignment. When we say the AI should follow the law, whose law? US rights-and-freedoms law imposed on European AI would be a colossal step backwards for human and civil rights in Europe, unseen since WW2, whereas European copyright and patent law imposed on US AI would be a huge step backwards for the US (don’t get me started on the EU’s patent laws!). If the AI simply follows the law of wherever it is based, operating internationally will be difficult for everything except physical robot systems, which could follow the same rules people currently do, switching legal frameworks as soon as they cross a physical boundary or border. Perhaps we could create an international agreement on core concepts, along the lines of the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR), and the European Convention on Human Rights (ECHR). We unified laws about rights to avoid another Holocaust; why not unify laws about alignment to avoid a similar, yet potentially larger, catastrophe?
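To make the “whose law?” problem concrete, here’s a toy sketch (the jurisdictions, actions, and rule sets below are entirely made up for illustration): a naive “follow the law” directive is underspecified whenever the applicable jurisdictions disagree about an action.

```python
# Hypothetical toy rule sets: True = permitted, False = forbidden.
RULES = {
    "US": {"sell_user_data": True},
    "EU": {"sell_user_data": False},  # e.g. GDPR-style restrictions
}

def permitted(action: str, jurisdictions: list[str]) -> dict[str, bool]:
    """Return each jurisdiction's verdict on the action."""
    return {j: RULES[j][action] for j in jurisdictions}

verdicts = permitted("sell_user_data", ["US", "EU"])
if len(set(verdicts.values())) > 1:
    # The directive "follow the law" gives no answer here.
    print("Conflict of laws:", verdicts)
    # prints: Conflict of laws: {'US': True, 'EU': False}
```

The sketch just surfaces the conflict; it says nothing about how to resolve it, which is exactly the choice-of-law policy question.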
Definitely agree. I think the practical baby step is to develop the capability of AI to interpret and apply any given legal system. But insofar as we actually want AIs to be law-following, we obviously need to solve the jurisdictional and choice of law questions, as a policy matter. I don’t think we’re close to doing that—even many of the jurisdictional issues in cyber are currently contentious. And as I think you allude to, there’s also a risk of regulatory arbitrage, which seems bad.
(And yes, the US[1] is also common law. :-) )
[1] Except the civil law of Louisiana, interestingly.
No problem RE timescale of reply! Thank you for such a detailed and thoughtful one :)