Without this activism, the AI ethics and safety field would have been much smaller and less influential.
From my paper "Activism by the AI Community: Analysing Recent Achievements and Future Prospects":
“2.2 Ethics and safety
There has been sustained activism from the AI community to emphasise that AI should be developed and deployed in a safe and beneficial manner. This has involved Open Letters, AI principles, the establishment of new centres, and influencing governments.
The Puerto Rico Conference in January 2015 was a landmark event to promote the beneficial and safe development of AI. It led to an Open Letter signed by over 8,000 people calling for the safe and beneficial development of AI, and a research agenda to that end [21]. The Asilomar Conference in January 2017 led to the Asilomar AI Principles, signed by several thousand AI researchers [23]. Over a dozen sets of principles from a range of groups followed [61].
The AI community has established several research groups to understand and shape the societal impact of AI. AI conferences have also expanded their work to consider the impact of AI. New groups include:
OpenAI (December 2015)
Centre for Human-Compatible AI (August 2016)
Leverhulme Centre for the Future of Intelligence (October 2016)
DeepMind Ethics and Society (October 2017)
UK Government’s Centre for Data Ethics and Innovation (November 2017)”