“…just really haven’t seen where the Safety->Ethics hostility has been…”
From the perspective of AI Ethics researchers, AI Safety researchers and engineers contributed to the development of “everything for everyone” models – and also distracted attention from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety have collaborated and mingled with people at large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment. But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics community.
I understand that if you look at tweets by people like Dr Gebru, they can appear overly intense and unwarranted (what did we ever say to them?). But we need to be aware of the historical position of power that the AI Safety community has actually had, what narratives we ended up spreading (shifting the Overton window on “AGI”), and what that has led to.
From the perspective of AI Ethics researchers, here is this dominant group, longtermists broadly, that has caused all this damage. And AI Ethics people are organising and screaming at the top of their lungs to get the harms to stop.
From their perspective, they need to put pressure on longtermists and call them out in public, otherwise the harms will continue. The longtermists are not as aware of those harms (or don’t care about them as much, compared to their techno-future aspirations), so they see it as unfair/bad to be called out this way as a group.
Then, when AI Ethics researchers critique us with words, some people in and around our community (usually the more blatant ones) respond with “why are you so mean to us? why are you saying transhumanists are like eugenicists? why are you against us trying to steer technological progress? why don’t you consider extinction risks?”
Hope that’s somewhat clarifying. I know this is not going to resonate with many people here, so I’m ready for the downvotes.
I found this comment very helpful, Remmelt, so thank you. I think I’m going to respond to it via PM.