https://facctconference.org is the major conference in the area. It's interdisciplinary: a mix of technical ML work, social/legal scholarship, and humanities-style papers.
Some big names: Moritz Hardt, Arvind Narayanan, and Solon Barocas wrote a textbook, https://fairmlbook.org, and they and many of their students are important contributors. Cynthia Dwork is another big name in fairness, and Cynthia Rudin in explainable/interpretable ML. That's a non-exhaustive list, but I think it's a decent seed for a search through coauthors.
I believe there is in fact important technical overlap between the two problem areas. For example, https://causalincentives.com is research from a group of people who see themselves as working in AI safety. Yet people in the fair ML community are also very interested in causality, and they study it for similar reasons using similar tools.
I think much of the expressed animosity arises only because the two research communities seem to select for people with very different preexisting political commitments (left/social justice vs. neoliberal), and they find each other threatening for that reason.
On the other hand, there are differences. An illustrative one is that fair ML people care a lot about the fairness properties of linear models, both in theory and in practice, right now. By contrast, it would be strange for an AI safety person to care at all about a linear model: linear models are just too small and nothing like the kind of AI that could become unsafe.
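To make that concrete, here is a minimal sketch of the kind of question fair ML asks of a linear model: does a logistic regression give positive predictions at similar rates across groups (demographic parity)? Everything here, the synthetic data, the group variable, and the gap metric, is invented for illustration rather than taken from any particular paper.

```python
# Minimal illustrative sketch: measuring one fairness property,
# demographic parity, of a plain linear classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)            # hypothetical binary protected attribute
x = rng.normal(size=(n, 3)) + group[:, None]  # features correlated with group membership
y = (x @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(x, y)        # the linear model in question
pred = model.predict(x)

# Demographic parity compares positive-prediction rates across groups;
# a large gap is one (contested) signal of unfairness.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"P(pred=1 | group 0) = {rate_0:.3f}")
print(f"P(pred=1 | group 1) = {rate_1:.3f}")
print(f"demographic parity gap = {abs(rate_0 - rate_1):.3f}")
```

The point of the sketch is that the analysis is entirely about a small, transparent model's behavior on a concrete population, which is exactly the regime the AI safety community tends not to care about.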