First, I want to thank you for engaging, David. I get the sense we've disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand it. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing it, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad faith; that's not my intention, but I do admit I've somewhat lost my cool on this topic of late. But in my defence, sometimes that's the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.
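(To spell out the game-theoretic intuition behind that last sentence: in a repeated Prisoner's Dilemma, unconditional cooperation against a persistent defector does strictly worse than defecting in kind. A minimal sketch, using the standard textbook payoff values rather than anything specific to this discussion:)

```python
# Standard illustrative Prisoner's Dilemma payoffs (textbook values, purely for
# illustration): mutual cooperation = 3, mutual defection = 1,
# defecting against a cooperator = 5, cooperating against a defector = 0.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def total_payoff(my_moves, their_moves):
    """Sum my payoff across repeated rounds."""
    return sum(PAYOFF[(mine, theirs)] for mine, theirs in zip(my_moves, their_moves))

rounds = 10
always_defect = ["D"] * rounds

print(total_payoff(["C"] * rounds, always_defect))  # 0: the sucker's payoff every round
print(total_payoff(["D"] * rounds, always_defect))  # 10: defecting in kind at least scores
```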
As for your comment/reply, though, I'm not entirely sure what to make of it. To try to clarify: I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAccT Community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former. Instead, I feel like you've steered the conversation away to a discussion about the implications of naïve utilitarianism. I also feel we may disagree on how much Torres has legitimate criticisms and how much of their work is simply wilful "misrepresentation" (I wonder if you've changed your mind on Torres since last year?). There are definitely connections there, but I don't think it's quite the same conversation, and I think it somewhat telling that you responded to suggestions 3 & 4 and not 1 & 2, which I think are far less controversial (fwiw, I agree that legal action should only be used once all other courses of action have failed).
To clarify what I'm trying to get at here with some more examples, which I hope will be reasonably unobjectionable even if incorrect:
Yesterday Timnit again insinuated that William MacAskill is a eugenicist. You can read that tweet, and I don't think she means this in a "belongs to a historical tradition" way; I think she means it in a "this is what he believes" way. I haven't seen anyone from the FAccT Community call this out. In fact, Margaret Mitchell responded to Jess Whittlestone's attempt to offer an olive branch with confusion that there's any extreme behaviour in the AI Ethics field at all.
People working in AI Safety and/or associated with EA should therefore expect to be called eugenicists, and the more Timnit's perspective gains prominence, the more they will have to deal with the consequences of this.
Noah Giansiracusa's thread, which I linked in the last tweet, is highly conspiratorial, spreads reckless misinformation, and is often just wrong. Not only has he doubled down despite pushback,[2] but today he tried to bridge the Safety/Ethics divide, seemingly unaware that trashing the other side in a 26-tweet screed is massively damaging to that goal.
This suggests that while AI Safety efforts to build bridges may have some success, there may be a strong and connected group of scholars who will either not countenance them at all, or be happy to stick the knife in once the opportunity appears. If I were an AI Safety academic, I wouldn't trust Noah.
In general, my hope is that work is going on behind the scenes and off Twitter to build bridges between the two camps. But many of the more toxic names on the FAccT side are quite prominent, and given the culture of silence/bullying involved there (again, see the Rumman Chowdhury tweet in the original comment, with further evidence here), I'm not sure I feel as hopeful that it will happen as I did in recent weeks.
The more I look into it, the more I see the hostility as asymmetric. I'd be very open to counter-evidence on this point, but I don't see AI Safety people treating the other camp with such naked hostility, and I definitely don't see it from the more influential members of the movement, as far as I can tell. (And almost certainly not any more than usual over the past week or so? As I said, a lot of this seems to have kicked off post-CAIS Letter.)
My call to not "be passive" was made in the expectation that hostility to the field of AI Safety will continue, perhaps grow, and be amplified by influential figures in the AI space. I maintain that the general EA media strategy of ignoring critics, or engaging them only with the utmost politeness, has been a net-negative strategy, and will continue to be so if kept up, with perhaps very bad consequences.
Anyway, I'd like to thank you for sharing your perspective, and I do hope my perceptions have been skewed to be too pessimistic. To others reading, I'd really appreciate hearing your thoughts on these topics, and points of view or explanations that might change my mind.
[1] I think this is better than the Safety/Ethics labelling, but I'm referring to the same divide here.
[2] Long may EA Twitter dunk on him until a retraction appears.
I mean, in a sense, a venue that hosts Torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty except insofar as they haven't seen or don't believe this Fuentes person.
I guess I thought my points about total utilitarianism were relevant, because "we can make people like us more by pushing back more against misrepresentation" is only true insofar as the real views we have will not offend people. I'm also just generically anxious about people in EA believing things that feel scary to me. (As I say, I'm not actually against people correcting misrepresentations, obviously.)
I don't really have much sense of how reasonable critics are or aren't being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it's a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply we should be prepared to take big chances of AI catastrophe rather than pass up small chances of creating lots of very happy digital people.
On Torres specifically: I don't really follow them in detail (these topics make me anxious), but I didn't intend to claim that they are a fair or measured critic, just that they have a decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that's just false. I think motivated reasoning is a more likely explanation than conscious lying for why they say false things, but that's just because that's my prior about most people. (Edit: actually, I'm a little less sure of that after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)
Regarding Gebru calling Will a eugenicist: well, I really doubt you could "sue" over that, or demonstrate to the people most concerned about this that he doesn't count as one by any reasonable definition. Some people use "eugenicist" for anyone who prefers that a non-disabled person come into existence rather than a different, disabled person. And Will does have that preference. In What We Owe the Future, he takes it as obvious that if you have a medical condition which means that a child conceived right now will have awful, painful migraines, then you should wait a few weeks to conceive, so that you have a different child who doesn't have migraines. I think plenty of ordinary people would be fine with that and puzzled by Gebru-like reactions, but it probably does meet some literal definitions that have been given for "eugenics". Just suggesting he is a "eugenicist" without further clarification is nonetheless misleading and unfair in my view, but that's not quite what libel is. Certainly I have met philosophers with strong disability rights views who regard Will's kind of reaction to the migraine case as bigoted. (Not that I endorse that view myself.)
None of this is any kind of overall endorsement of how the "AI ethics" crowd on Twitter talks, whether in general or about EAs specifically. I haven't been much exposed to it, and when I have been, I generally haven't liked it.