Legal personality & AI systems

From the first draft of the UNESCO Recommendation on AI Ethics:

Policy Action 11: Ensuring Responsibility, Accountability and Privacy

94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.

I see that the point of the last sentence is to prevent individuals and companies from escaping liability for AI failures. However, the last clause also seems to rule out creating some sort of "AI DAO", i.e., a legal entity implemented entirely by an autonomous system. That doesn't seem reasonable; after all, what is a company if not a kind of artificial agent?