First, I want to thank you for engaging, David. I get the sense we’ve disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing it, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad faith—it’s not my intention, but I do admit I’ve somewhat lost my cool on this topic of late. But in my defence, sometimes that’s the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.
As for your comment/reply though, I’m not entirely sure what to make of it. To try to clarify, I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAact Community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former. Instead, I feel like you’ve steered the conversation away to a discussion about the implications of naïve utilitarianism. I also feel we may disagree on how much Torres has legitimate criticisms and how much of their work is simply wilful ‘misrepresentation’ (I wonder if you’ve changed your mind on Torres since last year?). There are definitely connections there, but I don’t think it’s quite the same conversation, and I think it somewhat telling that you responded to suggestions 3 & 4, and not 1 & 2, which I think are far less controversial (fwiw I agree that legal action should only be used once all other courses of action have failed).
To clarify what I’m trying to get at here with some more examples, which I hope will be reasonably unobjectionable even if incorrect:
Yesterday Timnit again insinuated that William MacAskill was a eugenicist. You can read that tweet, and I don’t think she means this in a ‘belongs to a historical tradition’ way; I think she means it in a ‘this is what he believes’ way. I haven’t seen anyone from the FAact Community call this out. In fact, Margaret Mitchell responded to Jess Whittlestone’s attempt to offer an olive branch with confusion that there’s any extreme behaviour in the AI Ethics field at all.
People working in AI Safety and/or associated with EA should therefore expect to be called eugenicists, and the more Timnit’s perspective gains prominence, the more they will have to deal with the consequences of this.
Noah Giansiracusa’s thread that I linked in the last tweet is highly conspiratorial, spreads reckless misinformation, and is often just wrong. Not only has he doubled down despite pushback,[2] but today he tried to bridge the Safety/Ethics divide, seemingly unaware that trashing the other side in a 26-tweet screed is massively damaging to that goal.
This suggests that while AI Safety efforts to build bridges may have some success, there may be a strong and well-connected group of scholars who will either not countenance it at all, or be happy to stick the knife in once the opportunity appears. If I were an AI Safety academic, I wouldn’t trust Noah.
In general, my hope is that work is going on behind the scenes and off Twitter to build bridges between the two camps. But a lot of names on the FAact side that seem to be more toxic are quite prominent, and given the culture of silence/bullying involved there (again, see the Rumman Chowdhury tweet in the original comment, with further evidence here) I’m not sure I feel as hopeful it will happen as I did in recent weeks.
The more I look into it, the more I see the hostility as asymmetric. I’d be very open to counter-evidence on this point, but I don’t see AI Safety people treating the other camp with such naked hostility, and definitely not from the more influential members of the movement, as far as I can tell. (And almost certainly not any more than usual over the past week or so? As I said, a lot of this seems to have kicked off post the CAIS letter.)
My call to not ‘be passive’ was made because I expect hostility to the field of AI Safety to continue, perhaps grow, and be amplified by influential figures in the AI space. I maintain that the general EA media strategy of ignoring critics, and engaging them only with the utmost politeness when engaging at all, has been a net negative strategy, and will continue to be so if continued—with perhaps very bad consequences.
Anyway, I’d like to thank you for sharing your perspective, and I do hope my perceptions have been skewed to be too pessimistic. To others reading, I’d really appreciate hearing your thoughts on these topics, and points of view or explanations that might change my mind.
I guess I thought my points about total utilitarianism were relevant, because ‘we can make people like us more by pushing back more against misrepresentation’ is only true insofar as the real views we have will not offend people. I’m also just generically anxious about people in EA believing things that feel scary to me. (As I say, I’m not actually against people correcting misrepresentations obviously.)
I don’t really have much sense of how reasonable critics are or aren’t being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it’s a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply that we should be prepared to take big chances of AI catastrophe rather than pass up small chances of lots of v. happy digital people.
On Torres specifically: I don’t really follow them in detail (these topics make me anxious), but I didn’t intend to claim that they are a fair or measured critic, just that they have a decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that’s just false. I think motivated reasoning is a more likely explanation for why they say false things than conscious lying, but that’s just because that’s my prior about most people. (Edit: Actually, I’m a little less sure of that, after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)
Regarding Gebru calling Will a eugenicist: well, I really doubt you could “sue” over that, or demonstrate to the people most concerned about this that he doesn’t count as one by any reasonable definition. Some people use “eugenicist” for any preference that a non-disabled person come into existence rather than a different disabled person. And Will does have that preference. In What We Owe the Future, he takes it as obvious that if you have a medical condition which means that if you conceive right now, your child will have awful painful migraines, then you should wait a few weeks to conceive so that you have a different child who doesn’t have migraines. I think plenty of ordinary people would be fine with that and puzzled by Gebru-like reactions, but it probably does meet some literal definitions that have been given for “eugenics”. Just suggesting he is a “eugenicist” without further clarification is nonetheless misleading and unfair in my view, but that’s not quite what libel is. Certainly I have met philosophers with strong disability rights views who regard Will’s kind of reaction to the migraine case as bigoted. (Not endorsing that view myself.)
None of this is some kind of overall endorsement of how the ‘AI ethics’ crowd on Twitter talk overall, or about EAs specifically. I haven’t been much exposed to it, and when I have been, I generally haven’t liked it.
I think this is better than the Safety/Ethics labelling, but I’m referring to the same divide here.
Long may EA Twitter dunk on him until a retraction appears.
I mean, in a sense, a venue that hosts Torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty, except insofar as they haven’t seen or don’t believe this Fuentes person.