bob—I think this is a brilliant idea, and it could be quite effective in slowing down reckless AI development.
For this to be effective, it would require working with experienced lawyers who know the relevant national and international laws and regulations (e.g. in the US, UK, or EU) very well, who understand AI to some degree, and who are creative in spotting ways that new AI systems might inadvertently (or deliberately) violate those laws and regulations. They’d also need to be willing to sue powerful tech companies. Those companies have very deep pockets, which makes them formidable opponents, but it also means litigation could be very lucrative for law firms that have the guts to go after them.
For example, in the US, HIPAA privacy rules govern how companies handle private medical information. Any AI system that allows or encourages users to share private medical information (such as asking a chatbot about their symptoms, diseases, medications, or psychiatric issues) is probably not designed to comply with those HIPAA regulations, and violating HIPAA is a very serious legal issue.
More generally, any AI system that offers advice to users regarding medical, psychiatric, clinical psychology, legal, or financial matters might be in violation of laws that give various professional guilds a government-regulated monopoly on these services. For example, if a chatbot is basically practicing law without a license, practicing medicine without a license, practicing clinical psychology without a license, or giving financial advice without a license, then the company that created that chatbot might be violating some pretty serious laws. Moreover, the professional guilds have every incentive to protect their turf against AI intrusions that could result in mass unemployment among their guild members. And those guilds have plenty of legal experience suing interlopers who challenge their monopoly. The average small law firm might not be able to effectively challenge Microsoft’s corporate legal team that would help defend OpenAI. But the American Medical Association might be ready and willing to challenge Microsoft.
AI companies would also have to be very careful not to violate laws and regulations regarding the production of terrorist propaganda, adult pornography (illegal in many countries, such as China and India), child pornography (illegal in most countries), heresy (e.g. violating Sharia law in fundamentalist Muslim countries), etc. I doubt that most devs or managers at OpenAI or DeepMind are thinking very clearly or proactively about how to avoid running afoul of state security laws in China, Sharia law in Pakistan, or even EU privacy laws. But lawyers in each of those countries might realize that American tech companies are rich enough to be worth suing in their own national courts. How long will Microsoft or Google have the stomach for defending their AI subsidiaries in the courts of Beijing, Islamabad, or Brussels?
There are probably dozens of other legal angles for slowing down AI. As AI systems become more general-purpose and more globally deployed, the number of ways they might violate laws and regulations across different nations grows very large, and the legal ‘attack surface’ that leaves AI companies vulnerable to litigation will keep expanding.
Long story short: rather than focusing on trying to pass new global regulations to limit AI, it may be more effective to exploit the thousands of ways that new AI systems will likely violate existing laws and regulations in different countries. Identifying those violations, and using them as leverage to slow down dangerous AI development, might be a very fast, clever, and effective use of EA resources to reduce X-risk.
There are definitely a lot of legal angles that AI will implicate, although some of the examples you provided suggest the situation is more mixed:
The HIPAA rules don’t apply to everyone. See, e.g., 45 C.F.R. § 164.104 (stating the entities to which the HIPAA Privacy Rule applies). If you tell me about your medical condition (not in my capacity as a lawyer), HIPAA doesn’t stop me from telling whomever I would like. I don’t see how telling a generalized version of ChatGPT is likely to be different.
I agree that professional-practice laws will be relevant in the AI context, although I think AI companies know that the real money is in providing services to licensed professionals to super-charge their work and not in providing advice to laypersons. I don’t think you can realistically monetize a layperson-directed service without creating some rather significant liability concerns even apart from unauthorized-practice concerns.
The foreign law problem you describe is about as old as the global Internet. Companies can and do take steps to avoid doing business in countries where the laws are considered unfriendly. Going after a U.S. tech company in a foreign court often only makes sense if (a) the tech company has assets in the foreign jurisdiction; or (b) a court in a country where the tech company has assets will enforce the foreign court order. For instance, no U.S. court will enforce a judgment for heresy.
More fundamentally, I don’t think it will be OpenAI, etc. who are providing most of these services. They will license their technology to other companies who will actually provide the services, and those companies will not necessarily have the deep pockets. Generally, we don’t hold tool manufacturers liable when someone uses their tools to break the law (e.g., Microsoft Windows, Amazon Web Services, a gun). So you’d need to find a legal theory that allowed imputing liability onto the AI company that provided an AI tool to the actual service provider. That may be possible but is not obvious in many cases.
Jason—thanks for these helpful corrections, clarifications, and extensions.
My comment was rather half-baked, and you’ve added a lot to think about!