epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this, Sean, but I don't think there's much chance of making inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively, and she'll use her influence as strongly as possible in the "AI Ethics" community.
Seth Lazar also seems intractably anti-EA. It's annoying how much of this dialogue happens on Twitter/X, especially since it's very difficult for me as a non-Twitter user to find the threads, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor also once posted a similarly anti-longtermist thread, and later responded to Jess Whittlestone lamenting the gap between the Safety and Ethics fields. I just haven't seen where the Safety->Ethics hostility has been; I've really only ever seen the reverse, though of course I'm sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work along these lines can happen.
But I really think there's a strong anti-EA sentiment among the generally left-wing/critical-aligned parts of the "AI Ethics" field, and they aren't taking any prisoners. In their eyes xRisk-focused AI Safety is bad, EA is bad, and we're in a direct zero-sum conflict over public attention and power. I think offering a hand is commendable, but any AI Safety researchers reading this had better have their shield at the ready in case the hostile attacks come.
From the perspective of AI Ethics researchers, AI Safety researchers and engineers contributed to the development of "everything for everyone" models, and also distracted attention from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety collaborated and mingled with people at the large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment.
But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background concerns here that are hard to convey and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics community.
I understand that if you look at tweets by people like Dr Gebru, it can appear overly intense and unwarranted (what did we ever say to them?). But we need to be aware of the historical position of power that the AI Safety community has actually had, what narratives we ended up spreading (moving the Overton window over "AGI"), and what that has led to.
From the perspective of AI Ethics researchers, here is this dominant group, longtermists broadly, that has caused all this damage. And AI Ethics people are organising and screaming at the top of their lungs to get the harms to stop.
From their perspective, they need to put pressure on longtermists and call them out in public, otherwise the harms will continue. Longtermists are not as aware of those harms (or don't care about them as much, compared to their techno-futurist aspirations), so they see it as unfair or bad to be called out this way as a group.
Then, when AI Ethics researchers critique us with words, some people involved around our community (usually the more blatant ones) respond with "why are you so mean to us? why are you saying transhumanists are like eugenicists? why are you against us trying to steer technological progress? why don't you consider extinction risks?"
Hope that's somewhat clarifying.
I know this is not going to resonate for many people here, so I'm ready for the downvotes.
I found this comment very helpful, Remmelt, so thank you. I think I'm going to respond to it via PM.
I think this is imprecise. In my mind there are two categories:
People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen in his complaints about the UK taskforce and his attempts to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytic philosophers looking for funding and competing with EA for clout. They've lost a lot of social capital because they keep repeating old canards about AI. My model of them is something like: they can't do fizzbuzz or explain what a transformer is, so they just say sentences about how AI can't do things and how there's a lot of hype and power centralisation. They are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem not to have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say that RLHF solves alignment and that "Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists."
People in the other camp are more likely to think EA is problematic, power-hungry, and covering for big tech. People in this camp would be your Dr. Gebru, DAIR, etc. I think these individuals are often much more technically proficient than the people in the first camp, and their view of EA is more akin to seeing it as a cult that seeks to indoctrinate people into a bundle of longtermist beliefs and carry water for AI labs. I will say that strategic collaborations are more fruitful here because of that technical proficiency, and personally I believe this latter group have better epistemics and are more truth-seeking, even if they are much more acerbic in their rhetoric. The higher level of technical proficiency means they can contribute to the UK taskforce on things like cybersecurity and evals.
I think measuring only along the axis of how tractable it is to gain allies is the wrong question; the real question is what the fruits of collaboration would be.
I don't know why people overindex on loud, grumpy Twitter people. I haven't seen evidence that most FAccT attendees are hostile and unsophisticated.
FAccT attendees are mostly a distinct group from the AI ethics researchers who come from, or are actively assisting, marginalised communities (and not via, e.g., fairness and bias abstractions).
Hmm, I'm not quite sure I agree that there's such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit's perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that as more a difference in degree than a difference in kind.
I also disagree that people in your second camp are going to be fruitful to collaborate with, as they don't just have technical objections but, I think, core philosophical objections to EA (or to what they view as EA).
I guess overall I'm not sure. It'd be interesting to see some mapping of AI researchers in some kind of belief-space plot so different groups could be distinguished. It's very easy to extrapolate from a few small examples and miss what's actually going on (which I admit I may well be doing with my pessimism here), but I sadly think it's telling that I see so few counterexamples of collaboration, while I can easily find examples of AI researchers being dismissive of or hostile to the AI Safety/xRisk perspective.
I don't think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it'll be hard to collaborate if one or both sides are frequently and publicly claiming that the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion rather than reason (etc.).