The Bottleneck in AI Policy Isn’t Ethics—It’s Implementation
This is a summary of Vincent Müller’s article Basic issues in AI policy.
Current and foreseeable AI systems are not moral agents
“…there is general agreement that current and foreseeable AI systems do not have what it takes to be responsible for their actions (moral agents), or to be systems that humans should have responsibility towards (moral patients). So, the responsibility remains firmly with the humans and for the humans—as well as other animals.”
Therefore, the main ethical issues are about the human design and use of AI.
The main issues in AI ethics
Privacy & Surveillance
Manipulation of Behaviour
Opacity of AI Systems
Bias in Decision Systems
Human-Robot Interaction
Automation and Employment
Autonomous Systems and Responsibility
Machine Ethics
Artificial Moral Agents
Singularity
See Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy) for a more detailed overview.
AI ethics informs AI policy
The goal of AI ethics is to say what is right and wrong in the design and use of AI.
This is important because it defines what the main aims of AI policy should be.
Policy consists of policy aims and policy means.
AI policy aims
There is global convergence around five ethical principles for AI:
Transparency
Justice and fairness
Non-maleficence
Responsibility
Privacy
These are specific policy aims. AI policy also needs general policy aims, which will differ by nation.
E.g. China and the US place greater emphasis on geostrategic aims (and are thus reluctant to limit automated weapons).
E.g. the EU will be sensitive to monopolies and place great emphasis on privacy.
General policy aims will be subject to:
Public opinion
Lobbying
Technical feasibility
Cost
AI policy means
The practical instruments and methods to further policy aims.
The main bottlenecks in AI policy lie in finding and applying appropriate policy means.
Options
Educational efforts (e.g. curriculum of AI degrees)
Framework for legal liability (e.g. insurance)
Impact assessment tools
Legal regulation
PR measures
Public spending
Self-assessment frameworks
Self-regulation (in industry)
Self-regulation (e.g. a “Hippocratic oath”)
Supporting ethics by design
Taxation
Technical standards (in a framework of legal regulation)
Moving forward, we can draw on political science and on ethics-driven policy in other fields (e.g. medical or engineering ethics) to overcome the bottlenecks in practical policy means.