In my view, Phil Torres' stuff, whilst not entirely fair, and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately, even if he misleads by omission somewhat*, and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite, and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like "is adding happy people actually good anyway?", get associated with less fair criticism, like "Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places", potentially biasing us against the legit stuff in a dangerous way.
But there could (again, in my view) easily be a wave of criticism coming from people who share Torres' political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven't really taken the time to understand EA/longtermist/AI safety ideas in the first place. I've already seen one decently well-known anti-"tech" figure on Twitter retweet a tweet that in its entirety consisted of "long-termism is eugenics!". People should prepare emotionally (I have already mildly lost my temper on Twitter in a way I shouldn't have, but at least I'm not anyone important!) for keeping their cool in the face of criticism that is:
-Poorly argued
-Very rhetorically forceful
-Based on straightforward misunderstandings
-Full of infuriatingly confident statements of highly contestable philosophical and empirical assumptions
-Reliant on guilt-by-association tactics of an obviously unreasonable sort**: e.g. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel
-Aimed at motives, not just ideas
-Gendered in a way that will play directly to the personal insecurities of some male EAs.
Alas, stuff can be all those things and also identify some genuine errors we're making. It's important we remain open to that, and also don't get too polarized politically by this kind of stuff ourselves.
* (i.e. he leaves out reasons to be longtermist that don't depend on total utilitarianism or on adding happy people being good, and doesn't discuss why you might reject person-affecting population ethics, etc.)
** I say "of an unreasonable sort" because in principle people's associations can be legitimately criticized if they have bad effects, just like anything else.
Great points, here's my impression:
Meta-point: I am not suggesting we do anything about this, or that we start insulting people and losing our tempers (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres' stuff is the worst we can expect. I am still reading Torres' stuff with an open mind to take away the good criticism (while keeping the entire context in consideration).
Regarding the articles: their method is to tell the general story in a way that makes it obvious they know a lot about EA and were involved in the past, but then to bend the truth as far as possible, so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in their writing, it's hard not to suspect they do it because it gives them plausible deniability: what they're saying is often not "wrong", but it is bent to the point that the reader ends up inferring things that are false.
To me, in the case of their latest article, you could leave with the impression that Bostrom and MacAskill (as well as the entirety of EA) both think the whole world should stop spending any money on philanthropy that helps anyone in the present (and that whatever is spent should go only to those who are privileged). The uninformed reader can leave with the impression that EA doesn't even actually care about human lives. The way they write gives them credibility with the uninformed, because it isn't just an all-out attack where their intentions are obvious to the reader.
Whatever you want to call it, this does not seem like good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.
*This is a response to both of your comments.
Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and keep our more reactive, child selves from coming to the fore.
In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!