Suing people nearly always makes you look like the asshole, I think.
As for Torres, it is fine for people to push back against specific false things they say. But fundamentally, even once you get past the misrepresentations, they highlight a bunch of things that various prominent EAs really do believe and say which genuinely do seem outrageous or scary to most people, and no amount of pushback is likely to persuade most of those people otherwise.
In some cases, I think that outrage fairly clearly isn’t justified once you think things through carefully: e.g. the quote from Nick Beckstead about saving lives being, all else equal, higher value in rich countries because of flow-through effects, which Torres always cites as proof that Beckstead is a white supremacist. But in other cases, well, it’s hardly news that utilitarianism has a bunch of implications that strongly contradict moral commonsense, or that EAs are sympathetic to utilitarianism. And ‘oh, but I don’t endorse [outrageous-sounding view], I merely think there is like a 60% chance it is true, and you should be careful about moral uncertainty’ does not sound very reassuring to a normal outside person.
For example, take Will on double-or-nothing gambles (https://conversationswithtyler.com/episodes/william-macaskill/) where you do something that has a 49% chance of destroying everyone, and a 51% chance of doubling the number of humans in existence (now and in the future). It’s a little hard to make out exactly what Will’s overall position on this is, but he does say it is hard to justify not taking those gambles:
‘Then, in this case, it’s not an example of very low probabilities, very large amounts of value. Then your view would have to argue that, “Well, the future, as it is, is like close to the upper bound of value,” in order to make sense of the idea that you shouldn’t flip 50⁄50. I think, actually, that position would be pretty hard to defend, is my guess. My thought is that, probably, within a situation where any view you say ends up having pretty bad, implausible consequences’
And he does seem to say there are some gambles of this kind he might take:
‘Also, just briefly on the 51/49: Because of the pluralism that I talked about — although, again, it’s meta pluralism — of putting weight on many different model views, I would at least need the probabilities to be quite a bit wider in order to take the gamble...’
Or to give another example, the Bostrom and Shulman paper on digital minds talks about how, if digital minds really have better lives than us, then classical (total) utilitarianism says they should take all our resources and let us starve. Bostrom and Shulman are against that in the paper. But I think it is fair to say they take utilitarianism seriously as a moral theory. And lots of people are going to think taking seriously the idea that this could be right is already corrupt, and vaguely Hitler-ish/reminiscent of white settler expansionism against Native Americans.
In my view, EAs should be more clearly committed to rejecting (total*) utilitarianism in these sorts of cases than they actually are. Though I understand that moral philosophers correctly think the arguments for utilitarianism, or views which have similar implications to utilitarianism in these contexts, are disturbingly strong.
*In both of the cases described, person-affecting versions of classical utilitarianism, which deny that creating happy people is good, don’t have the scary consequences.
First, I want to thank you for engaging, David. I get the sense we’ve disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view on them, even if I do struggle to understand it. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing it, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad faith—it’s not my intention, but I do admit I’ve somewhat lost my cool on this topic of late. But in my defence, sometimes that’s the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.
As for your comment/reply though, I’m not entirely sure what to make of it. To try to clarify: I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAact Community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former. Instead, I feel like you’ve steered the conversation towards a discussion of the implications of naïve utilitarianism. I also feel we may disagree on how much Torres has legitimate criticisms and how much of their work is simply wilful ‘misrepresentation’ (I wonder if you’ve changed your mind on Torres since last year?). There are definitely connections there, but I don’t think it’s quite the same conversation, and I think it somewhat telling that you responded to suggestions 3 & 4, and not 1 & 2, which I think are far less controversial (fwiw I agree that legal action should only be used once all other courses of action have failed).
To clarify what I’m trying to get at here with some more examples, which I hope will be reasonably unobjectionable even if incorrect:
Yesterday Timnit again insinuated that William MacAskill is a eugenicist. You can read that tweet, and I don’t think she means this in a ‘belongs to a historical tradition’ way; I think she means it in a ‘this is what he believes’ way. I haven’t seen anyone from the FAact Community call this out. In fact, Margaret Mitchell responded to Jess Whittlestone’s attempt to offer an olive branch with confusion that there’s any extreme behaviour amongst the AI Ethics field.
People working in AI Safety and/or associated with EA should therefore expect to be called eugenicists, and the more Timnit’s perspective gains prominence, the more they will have to deal with the consequences of this.
Noah Giansiracusa’s thread that I linked in the last tweet is highly conspiratorial, spreads reckless misinformation, and is often just wrong. Not only has he doubled down despite pushback,[2] but today he tried to bridge the Safety/Ethics divide, seemingly unaware that trashing the other side in a 26-tweet screed is massively damaging to that goal.
This suggests that while AI Safety efforts to build bridges may have some success, there may be a strong and connected group of scholars who will either not countenance them at all, or be happy to stick the knife in once the opportunity appears. If I were an AI Safety academic, I wouldn’t trust Noah.
In general, my hope is that work is going on behind the scenes, off Twitter, to build bridges between the two camps. But a lot of the more toxic names on the FAact side are quite prominent, and given the culture of silence/bullying involved there (again, see the Rumman Chowdhury tweet in the original comment, with further evidence here), I’m not sure I feel as hopeful it will happen as I did in recent weeks.
The more I look into it, the more I see the hostility as asymmetric. I’d be very open to counter-evidence on this point, but I don’t see AI Safety people treating the other camp with such naked hostility, and definitely not from the more influential members of the movement, as far as I can tell. (And almost certainly not any more than usual over the past week or so? As I said, a lot of this seems to have kicked off post CAIS Letter).
My call not to ‘be passive’ was made in the expectation that hostility to the field of AI Safety will continue, perhaps grow, and be amplified by influential figures in the AI space. I maintain that the general EA media strategy of ignoring critics, and of engaging them only with the utmost politeness, has been a net-negative strategy, and will continue to be so if continued—with perhaps very bad consequences.
Anyway, I’d like to thank you for sharing your perspective, and I do hope my perceptions have been skewed to be too pessimistic. To others reading, I’d really appreciate hearing your thoughts on these topics, and points of view or explanations that might change my mind.
I guess I thought my points about total utilitarianism were relevant, because ‘we can make people like us more by pushing back more against misrepresentation’ is only true insofar as the real views we have will not offend people. I’m also just generically anxious about people in EA believing things that feel scary to me. (As I say, I’m not actually against people correcting misrepresentations obviously.)
I don’t really have much sense of how reasonable critics are or aren’t being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it’s a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply that we should be prepared to take big chances of AI catastrophe rather than pass up small chances of lots of v. happy digital people.
On Torres specifically: I don’t really follow them in detail (these topics make me anxious), but I didn’t intend to claim that they are a fair or measured critic, just that they have a decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that’s just false. I think motivated reasoning is a more likely explanation for why they say false things than conscious lying, but that’s just because that’s my prior about most people. (Edit: Actually, I’m a little less sure of that, after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)
Regarding Gebru calling Will a eugenicist. Well, I really doubt you could “sue” over that, or demonstrate to the people most concerned about this that he doesn’t count as one by any reasonable definition. Some people use “eugenicist” for any preference that a non-disabled person comes into existence rather than a different disabled person. And Will does have that preference. In What We Owe the Future, he takes it as obvious that if you have a medical condition which means that if you conceive right now, your child will have awful painful migraines, then you should wait a few weeks to conceive so that you have a different child who doesn’t have migraines. I think plenty of ordinary people would be fine with that and puzzled by Gebru-like reactions, but it probably does meet some literal definitions that have been given for “eugenics”. Just suggesting he is a “eugenicist” without further clarification is nonetheless misleading and unfair in my view, but that’s not quite what libel is. Certainly I have met philosophers with strong disability rights views who regard Will’s kind of reaction to the migraine case as bigoted. (Not endorsing that view myself.)
None of this is some kind of overall endorsement of how the ‘AI ethics’ crowd on Twitter talk overall, or about EAs specifically. I haven’t been much exposed to it, and when I have been, I generally haven’t liked it.
[1] I think this is better than the Safety/Ethics labelling, but I’m referring to the same divide here.
[2] Long may EA Twitter dunk on him until a retraction appears.
I mean, in a sense, a venue that hosts Torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty, except insofar as they haven’t seen or don’t believe this Fuentes person.