Suing people nearly always makes you look like the asshole, I think.
As for Torres, it is fine for people to push back against specific false things they say. But fundamentally, even once you get past the misrepresentations, there is a bunch of stuff that they highlight that various prominent EAs really do believe and say that genuinely does seem outrageous or scary to most people, and no amount of pushback is likely to persuade most of those people otherwise.
In some cases, I think that outrage fairly clearly isn't really justified once you think things through very carefully: for example, the quote from Nick Beckstead about saving lives being, all things equal, higher value in rich countries because of flow-through effects, which Torres always says makes Beckstead a white supremacist. But in other cases, well, it's hardly news that utilitarianism has a bunch of implications that strongly contradict moral common sense, or that EAs are sympathetic to utilitarianism. And "oh, but I don't endorse [outrageous-sounding view], I merely think there is like a 60% chance it is true, and you should be careful about moral uncertainty" does not sound very reassuring to a normal outside person.
For example, take Will on double-or-nothing gambles (https://conversationswithtyler.com/episodes/william-macaskill/), where you do something that has a 49% chance of destroying everyone and a 51% chance of doubling the number of humans in existence (now and in the future). It's a little hard to make out exactly what Will's overall position on this is, but he does say it is hard to justify not taking those gambles:
"Then, in this case, it's not an example of very low probabilities, very large amounts of value. Then your view would have to argue that, 'Well, the future, as it is, is like close to the upper bound of value,' in order to make sense of the idea that you shouldn't flip 50-50. I think, actually, that position would be pretty hard to defend, is my guess. My thought is that, probably, within a situation where any view you say ends up having pretty bad, implausible consequences..."
And he does seem to say there are some gambles of this kind he might take:
"Also, just briefly on the 51/49: Because of the pluralism that I talked about - although, again, it's meta pluralism - of putting weight on many different model views, I would at least need the probabilities to be quite a bit wider in order to take the gamble..."
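To spell out the arithmetic driving that intuition, here is a toy sketch (the numbers and the linear, total-utilitarian value assumption are my own for illustration, not anything Will states in the transcript):

```python
# Toy illustration: why a naive total utilitarian, valuing outcomes linearly in
# the number of happy people, finds the 51/49 double-or-nothing gamble hard to refuse.
# Assumptions are mine, not from the transcript.

p_double, p_extinct = 0.51, 0.49   # probabilities from the example
v_current = 1.0                    # value of the future as it is (arbitrary units)

ev_take = p_double * (2 * v_current) + p_extinct * 0.0  # expected value if you gamble
ev_refuse = v_current                                   # expected value if you refuse

print(ev_take, ev_refuse)  # 1.02 vs 1.0 -> the naive calculation favours gambling
# Refusing requires something extra, e.g. a bounded value function or weight on
# non-totalist views, which is roughly the "pluralism" Will appeals to above.
```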
Or to give another example, the Bostrom and Shulman paper on digital minds talks about how, if digital minds really have better lives than us, then classical (total) utilitarianism says they should take all our resources and let us starve. Bostrom and Shulman are against that in the paper. But I think it is fair to say they take utilitarianism seriously as a moral theory. And lots of people are going to think taking seriously the idea that this could be right is already corrupt, and vaguely Hitler-ish/reminiscent of white settler expansionism against Native Americans.
In my view, EAs should be more clearly committed to rejecting (total*) utilitarianism in these sorts of cases than they actually are. Though I understand that moral philosophers correctly think the arguments for utilitarianism, or views which have similar implications to utilitarianism in these contexts, are disturbingly strong.
*In both of the cases described, person-affecting versions of classical utilitarianism, which deny that creating happy people is good, don't have the scary consequences.
First, I want to thank you for engaging, David. I get the sense we've disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand it. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing them, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad faith; that's not my intention, but I do admit I've somewhat lost my cool on this topic of late. But in my defence, sometimes that's the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.
As for your comment/reply though, I'm not entirely sure what to make of it. To try to clarify, I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAact Community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former. Instead, I feel like you've steered the conversation away to a discussion about the implications of naïve utilitarianism. I also feel we may disagree on how much Torres has legitimate criticisms and how much of their work is simply wilful "misrepresentation" (I wonder if you've changed your mind on Torres since last year?). There are definitely connections there, but I don't think it's quite the same conversation, and I think it somewhat telling that you responded to suggestions 3 & 4, and not 1 & 2, which I think are far less controversial (fwiw I agree that legal action should only be used once all other courses of action have failed).
To clarify what I'm trying to get at here with some more examples, which I hope will be reasonably unobjectionable even if incorrect:
Yesterday Timnit again insinuated that William MacAskill was a eugenicist. You can read that tweet, and I don't think she means this in a "belongs to a historical tradition" way; I think she means it in a "this is what he believes" way. I haven't seen anyone from the FAact Community call this out. In fact, Margaret Mitchell responded to Jess Whittlestone's attempt to offer an olive branch with confusion that there's any extreme behaviour amongst the AI Ethics field.
People working in AI Safety and/or associated with EA should therefore expect to be called eugenicists, and the more Timnit's perspective gains prominence, the more they will have to deal with the consequences of this.
Noah Giansiracusa's thread that I linked in the last tweet is highly conspiratorial, spreads reckless misinformation, and is often just wrong. Not only has he doubled down despite pushback,[2] but today he tried to bridge the Safety/Ethics divide, seemingly unaware that trashing the other side in a 26-tweet screed is massively damaging to this goal.
This suggests that while AI Safety efforts to build bridges may have some success, there may be a strong and connected group of scholars who will either not countenance it at all, or be happy to stick the knife in once the opportunity appears. If I were an AI Safety academic, I wouldn't trust Noah.
In general, my hope is that work is going on behind the scenes, off Twitter, to build bridges between the two camps. But a lot of the names on the FAact side that seem most toxic are quite prominent, and given the culture of silence/bullying involved there (again, see the Rumman Chowdhury tweet in the original comment, with further evidence here), I'm not sure I feel as hopeful it will happen as I did in recent weeks.
The more I look into it, the more I see the hostility as asymmetric. I'd be very open to counter-evidence on this point, but I don't see AI Safety people treating the other camp with such naked hostility, and definitely not from the more influential members of the movement, as far as I can tell. (And almost certainly not any more than usual over the past week or so? As I said, a lot of this seems to have kicked off post-CAIS Letter.)
My call to not "be passive" was made in the expectation that hostility to the field of AI Safety will continue, perhaps grow, and be amplified by influential figures in the AI space. I maintain that the general EA media strategy of ignoring critics, or engaging them only with the utmost politeness, has been a net-negative strategy, and will continue to be so if it persists, with perhaps very bad consequences.
Anyway, I'd like to thank you for sharing your perspective, and I do hope my perceptions have been skewed to be too pessimistic. To others reading, I'd really appreciate hearing your thoughts on these topics, and any points of view or explanations that might change my mind.
I think this is better than the Safety/Ethics labelling, but I'm referring to the same divide here.
Long may EA Twitter dunk on him until a retraction appears
I mean, in a sense a venue that hosts Torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty except insofar as they haven't seen or don't believe this Fuentes person.
I guess I thought my points about total utilitarianism were relevant, because "we can make people like us more by pushing back more against misrepresentation" is only true insofar as the real views we have will not offend people. I'm also just generically anxious about people in EA believing things that feel scary to me. (As I say, I'm not actually against people correcting misrepresentations, obviously.)
I don't really have much sense of how reasonable critics are or aren't being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it's a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply we should be prepared to take big chances of AI catastrophe rather than pass up small chances of lots of very happy digital people.
On Torres specifically: I don't really follow them in detail (these topics make me anxious), but I didn't intend to be claiming that they are a fair or measured critic, just that they have a decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that's just false. I think motivated reasoning is a more likely explanation for why they say false things than conscious lying, but that's just because that's my prior about most people. (Edit: Actually, I'm a little less sure of that, after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)
Regarding Gebru calling Will a eugenicist: well, I really doubt you could "sue" over that, or demonstrate to the people most concerned about this that he doesn't count as one by any reasonable definition. Some people use "eugenicist" for any preference that a non-disabled person comes into existence rather than a different disabled person. And Will does have that preference. In What We Owe the Future, he takes it as obvious that if you have a medical condition which means that if you conceive right now your child will have awful, painful migraines, then you should wait a few weeks to conceive so that you have a different child who doesn't have migraines. I think plenty of ordinary people would be fine with that and puzzled by Gebru-like reactions, but it probably does meet some literal definitions that have been given for "eugenics". Just suggesting he is a "eugenicist" without further clarification is nonetheless misleading and unfair in my view, but that's not quite what libel is. Certainly I have met philosophers with strong disability rights views who regard Will's kind of reaction to the migraine case as bigoted. (Not endorsing that view myself.)
None of this is some kind of overall endorsement of how the "AI ethics" crowd on Twitter talks, either in general or about EAs specifically. I haven't been much exposed to it, and when I have been, I generally haven't liked it.