On AI governance, we (Regulatory Institute) published a two-part report on artificial intelligence, which provides an overview of considerations for the regulation of AI as well as a comparative legal analysis of what various jurisdictions have done to regulate AI. For those interested, the links are below:
(1) covering the regulatory landscape (http://www.howtoregulate.org/artificial_intelligence/#more-322), and
(2) an outline of future AI regulation (http://www.howtoregulate.org/aipart2/#more-327).
The effectiveness of current AI regulations is a burgeoning area of regulatory research, particularly in the banking and trade sectors, where such regulations are mature. Governments are also turning their minds to the safe use of AI in the deployment of government services and decision-making, whether through various guidelines or, in some cases, following the failed deployment of automated decision-making systems (e.g. Australia’s Robodebt). We are now looking at how best to regulate the use of AI-based automated decision-making systems in government, particularly when the end-user has no choice of service or is a vulnerable member of the population.
But what of the role of government in identifying, assessing and mitigating the risks of AI in research, for example? We also published a four-part report on how research and technology risks could be covered by regulation; see below for links:
(1) Regulating Research and Technology Risks: Part I – Research Risks: http://www.howtoregulate.org/regulating-research-technology-risks-part-i-research-risks/#more-248
(2) Regulating Research and Technology Risks: Part II – Technology Risks: http://www.howtoregulate.org/technology-risks/
(3) Research and Technology Risks: Part III – Risk Classification: http://www.howtoregulate.org/classification-research-technology-risks/#more-296
(4) Research and Technology Risks: Part IV – A Prototype Regulation: http://www.howtoregulate.org/prototype-regulation-research-technology/