In my view, Phil Torres' stuff, whilst not entirely fair and quite nasty rhetorically, is far from the worst this could get. He actually is familiar in detail with what some people within EA think, reports that information fairly accurately (even if he misleads somewhat by omission*), and makes criticisms of controversial philosophical assumptions of some leading EAs that have genuine bite and might be endorsed by many moral philosophers. His stuff falls into a dangerous sweet spot where legitimate questions, like "is adding happy people actually good anyway?", get associated with less fair criticism ("Nick Beckstead did white supremacy when he briefly talked about the different flow-through effects of saving lives in different places"), potentially biasing us against the legitimate points in a dangerous way.
But there could easily, again in my view, be a wave of criticism coming from people who share Torres' political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven't really taken the time to understand EA/longtermist/AI safety ideas in the first place. I've already seen one decently well-known anti-"tech" figure on Twitter retweet a tweet that in its entirety consisted of "long-termism is eugenics!". People should prepare emotionally (I have already mildly lost my temper on Twitter in a way I shouldn't have, but at least I'm not anyone important!) to keep their cool in the face of criticism that:
- Is poorly argued
- Is very rhetorically forceful
- Is based on straightforward misunderstandings
- Involves infuriatingly confident statements of highly contestable philosophical and empirical assumptions
- Deploys guilt-by-association tactics of an obviously unreasonable sort**: e.g. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel
- Attacks motives, not just ideas
- Is gendered in a way that will play directly to the personal insecurities of some male EAs
Alas, criticism can be all those things and still identify some genuine errors we're making. It's important that we remain open to that, and that we don't get too politically polarized ourselves by this kind of stuff.
* (i.e. he leaves out reasons to be longtermist that don't depend on total utilitarianism or on adding happy people being good, doesn't discuss why you might reject person-affecting population ethics, etc.)
** I say “of an unreasonable sort” because in principle people’s associations can be legitimately criticized if they have bad effects, just like anything else.
Great points, here’s my impression:
Meta-point: I am not suggesting we do anything about this, or that we start insulting people and losing our tempers (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres' stuff is the worst we can expect. I am still reading Torres' stuff with an open mind, to take away the good criticism (while keeping the entire context in consideration).
Regarding the articles: their method is to tell the general story in a way that makes it obvious they know a lot about EA and were involved in the past, but then to bend the truth as much as possible, so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in their writing, it's hard not to suspect they do it because it gives them plausible deniability: what they say is often not "wrong", but it is bent to the point that the reader ends up inferring things that are false.
In the case of their latest article, for instance, you could come away with the impression that Bostrom and MacAskill (as well as the entirety of EA) think the whole world should stop spending any money on philanthropy that helps anyone in the present (and that any such spending should go only to the privileged). The uninformed reader can leave with the impression that EA doesn't actually care about human lives at all. The way they write gives them credibility with the uninformed, because it's not an all-out attack where their intentions are obvious to the reader.
Whatever you want to call it, this does not seem good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.
*This is a response to both of your comments.
Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves, rather than our reactive child selves, come to the fore.
In fact, I think I’ll reflect on this list for a long time to ensure I continue not to respond on Twitter!