I would call the cluster “AI ethics”. But there’s no hard cutoff, no sufficient philosophical difference: it’s mostly just social clustering. Here’s my short diplomatic piece about the gap.
We should do our best to resist forming explicit competing factions; as Prunkl and Whittlestone note, it’s all one space. Here’s a principled argument for doing this.
(Though it is hard to avoid being factional when one group is being extremely factional at you. And we don’t need to think that each point in the space is equally worrying.)
I like Jon Kleinberg, Zachary Lipton, Carolyn Ashurst, Andrew Trask, Shakir Mohamed, Hanna Wallach, Michael Kearns, Cynthia Rudin, Yonadav Shavit, Deborah Raji, Aaron Roth, Adrian Weller, McKane Andrus, Subbarao Kambhampati, Iason Gabriel, Max Langenkamp, Arvind Narayanan. Zoe Cremer is hard to classify but shares their animus. David Manheim, Andrew Critch, and Dylan Hadfield-Menell cross the hall to some extent. You can look up AIES and FAccT papers for more. The big names tend to be less fair (ha). (I’ve never seen anyone near the other cluster make such a list about safety people.)
To add to the other papers coming from the “AI safety / AGI” cluster calling for a synthesis of these views...
https://www.repository.cam.ac.uk/handle/1810/293033
https://arxiv.org/abs/2101.06110