Some are hostile, but not all, and the disagreements and divisions within AI ethics are just as deep as, if not deeper than, those within EA or any other broad community with multiple important aims that you can think of.
External oversight over the power of big tech is a goal worth helping to accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this, Sean, but I really don’t think there’s much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she’ll use her influence as strongly as possible in the ‘AI Ethics’ community.
Seth Lazar also seems intractably anti-EA. It’s annoying how much of this dialogue happens on Twitter/X, especially since it’s very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor also once posted a similarly anti-longtermist thread, and later responded to Jess Whittlestone lamenting the gap between the Safety and Ethics fields. I just really haven’t seen where the Safety->Ethics hostility has been; I’ve really only ever seen the reverse, though of course I’m sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work along these lines can happen.
But I really think there’s a strong anti-EA sentiment amongst the generally left-wing/critical-aligned parts of the ‘AI Ethics’ field, and they aren’t taking any prisoners. In their eyes AI xRisk Safety is bad, EA is bad, and we’re in a direct zero-sum conflict over public attention and power. I think offering a hand is commendable, but any AI Safety researchers reading had better have their shields at the ready just in case the hostile attacks come.
From the perspective of AI Ethics researchers, AI Safety researchers and engineers contributed to the development of “everything for everyone” models, and also distracted attention from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety collaborated and mingled with people at large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment.
But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey, and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics community.
I understand that if you look at tweets by people like Dr Gebru, it can appear overly intense and unwarranted (what did we ever say to them?). But we need to be aware of the historical position of power that the AI Safety community has actually had, what narratives we ended up spreading (moving the Overton window on “AGI”), and what that has led to.
From the perspective of AI Ethics researchers, here is this dominant group, broadly longtermists, that has caused all this damage. And AI Ethics people are organising and screaming at the top of their lungs to get the harms to stop.
From their perspective, they need to put pressure on longtermists, and they need to call them out in public, otherwise the harms will continue. The longtermists are not as aware of those harms (or don’t care that much about them compared to their techno-future aspirations), so longtermists see it as unfair/bad to be called out this way as a group.
Then when AI Ethics researchers critique us with words, some people involved around our community (usually the more blatant ones) are like “why are you so mean to us? why are you saying transhumanists are like eugenicists? why are you against us trying to steer technological progress? why don’t you consider extinction risks?”.
Hope that’s somewhat clarifying.
I know this is not going to resonate for many people here, so I’m ready for the downvotes.
I found this comment very helpful Remmelt, so thank you. I think I’m going to respond to this comment via PM.
I think this is imprecise. In my mind there are two categories:
1. People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen in his complaints about the UK taskforce and his attempts to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytical philosophers looking for funding and in clout competition with EA. They’ve lost a lot of social capital because they keep repeating old canards about AI. My model of them is something akin to: they can’t do fizzbuzz or say what a transformer is, so they’ll just say sentences about how AI can’t do things and how there’s a lot of hype and power centralisation. They are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to just not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say RLHF solves alignment and that “Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists.”
2. People in the other camp are more likely to think EA is problematic and power-hungry and covers for big tech. People in this camp would be your Dr. Gebru, DAIR, etc. I think these individuals are often much more technically proficient than the people in the first camp, and their view of EA is more akin to seeing it as a cult that seeks to indoctrinate people into a bundle of longtermist beliefs and carry water for AI labs. I will say the strategic collaborations are more fruitful here because there is more technical proficiency, and personally I believe this latter group have better epistemics and are more truth-seeking, even if they are much more acerbic in their rhetoric. The higher level of technical proficiency means they can contribute to the UK taskforce on things like cybersecurity and evals.
I think measuring only along the axis of how tractable it is to gain allies is the wrong question; the real question is what the fruits of collaboration are.
I don’t know why people overindex on loud grumpy twitter people. I haven’t seen evidence that most FAccT attendees are hostile and unsophisticated.
FAccT attendees are mostly a distinct group from the AI ethics researchers who come from, or are actively assisting, marginalised communities (rather than working with e.g. fairness and bias abstractions).
Hmm, I’m not quite sure I agree that there’s such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit’s perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that as more a difference of degree than a difference of kind.
I also disagree that people in your second camp are going to be useful for fruitful collaboration, as they don’t just have technical objections but, I think, core philosophical objections to EA (or what they view as EA).
I guess overall I’m not sure. It’d be interesting to see some mapping of AI researchers in some kind of belief-space plot, so that different groups could be distinguished. I think it’s very easy to extrapolate from a few small examples and miss what’s actually going on, which I admit I might very well be doing with my pessimism here, but I sadly think it’s telling that I see so few counterexamples of collaboration while I can easily find examples of AI researchers who are dismissive of or hostile to the AI Safety/xRisk perspective.
I don’t think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it’ll be hard to collaborate if one or both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion rather than reason (etc.).
I totally buy “there are lots of good, sensible AI ethics people with good ideas, and we should co-operate with them”. I don’t actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It’s only the idea that “being co-operative” will have much effect on whether articles like this get written, and whether hostile quotes from some prominent AI ethics people turn up in them, that I’m a bit skeptical of. My claim is not “AI ethics bad”, but “you are unlikely to be able to persuade the most hostile figures within AI ethics”.
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues: you’re never going to be able to convince a few of the most hardcore safety people that your justice/bias etc. work is anything but a trivial waste of time; in their view, anyone sane is working on averting the coming doom.
We don’t need to convince everyone, and there will always be some background of articles like this. But it’ll be a lot better if there’s a core of cooperative work too, on the things that benefit from cooperation.
My favourite recent example of (2) is this paper:
https://arxiv.org/pdf/2302.10329.pdf
Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g.
https://dl.acm.org/doi/10.1145/3278721.3278780
Another would be Haydn Belfield’s new collaboration with Kerry McInerney
http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/
Jess Whittlestone’s online engagements with Seth Lazar have been pretty productive, I thought.
I know you’re probably extremely busy, but if you’d like to see more collaboration between the x-risk community and AI ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post.
I’m significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
I expect many communities would agree on working to restrict Big Tech’s use of AI to consolidate power. List of quotes from different communities here.