Good post! I’m curious whether you have any thoughts on the potential conflicts or contradictions between the “AI ethics” community, which focuses on narrow AI and harms from current AI systems (members of this community include Gebru and Whittaker), and the AI governance community that has sprung out of the AI safety/alignment community (e.g. GovAI)? In my view, these two groups are quite opposed in their priorities and in how they think about AI (take a look at Timnit Gebru’s Twitter feed for a very stark example), and trying to put them under one banner doesn’t really make sense. This tension seems to encourage some strange tactics (such as AI governance people proposing regulations of narrow AI purely to slow down timelines, rather than for any of the usual reasons given by the AI ethics community), which could lead to a significant backlash.
Hi, yes, good question, and one that has been much discussed. Here are three papers on the topic:
Bridging near- and long-term concerns about AI
Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy
Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence
I’m personally of the view that there shouldn’t really be much conflict or contradiction: we’re all pushing for the safe, beneficial, and responsible development and deployment of AI, and there’s lots of common ground.
Agreed. One book that made this really clear for me was The Alignment Problem by Brian Christian; it does a great job of showing how it’s all part of the same overarching problem area.