One thing that may backfire with the slow rollout of talking to journalists is that people who mean to write about EA in bad faith will be the ones at the top of the search results. If you search something like “ea longtermism”, you might find bad-faith articles many of us are familiar with. I’m concerned we are setting ourselves up to give people unaware of EA a very bad-faith introduction.
Note: when I say “bad faith” here, the disagreement may just be semantic; I may not have the vocabulary to articulate exactly what I mean by “bad faith.” I actually agree with pretty much everything David has said in response to this comment.
In my view, Phil Torres’ stuff, whilst not entirely fair and quite nasty rhetorically, is far from the worst this could get. He actually is familiar in detail with what some people within EA think, reports that information fairly accurately (even if he misleads somewhat by omission*), and makes criticisms of controversial philosophical assumptions of some leading EAs that have genuine bite and might be endorsed by many moral philosophers. His stuff falls into the dangerous sweet spot where legitimate questions, like “is adding happy people actually good anyway?”, get associated with less fair criticism (“Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places”), potentially biasing us against the legitimate points in a dangerous way.
But there could easily (again, in my view) be a wave of criticism coming from people who share Torres’ political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven’t really taken the time to understand EA/longtermist/AI safety ideas in the first place. I’ve already seen one decently well-known anti-“tech” figure on Twitter retweet a tweet that in its entirety consisted of “long-termism is eugenics!”. People should prepare emotionally (I have already mildly lost my temper on Twitter in a way I shouldn’t have, but at least I’m not anyone important!) to keep their cool in the face of criticism that is:
-Poorly argued
-Very rhetorically forceful
-Based on straightforward misunderstandings
-Full of infuriatingly confident statements of highly contestable philosophical and empirical assumptions
-Reliant on guilt-by-association tactics of an obviously unreasonable sort**: e.g. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel
-An attack on motives, not just ideas
-Gendered in a way that will play directly to the personal insecurities of some male EAs
Alas, stuff can be all those things and still identify some genuine errors we’re making. It’s important that we remain open to that, and that we don’t get too politically polarized by this kind of stuff ourselves.
* (i.e. he leaves out reasons to be longtermist that don’t depend on total utilitarianism or adding happy people being good, doesn’t discuss why you might reject person-affecting population ethics etc.)
** I say “of an unreasonable sort” because in principle people’s associations can be legitimately criticized if they have bad effects, just like anything else.
Great points, here’s my impression:
Meta-point: I am not suggesting we do anything about this, or that we start insulting people and losing our tempers (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres’ stuff is the worst we can expect. I am still reading Torres’ stuff with an open mind to take away the good criticism (while keeping the entire context in consideration).
Regarding the articles: their approach is to tell the general story in a way that makes it obvious they know a lot about EA and were involved in the past, but then to bend the truth as much as possible so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in their writing, it’s hard not to suspect they do it because it gives them plausible deniability: what they’re saying is often not “wrong,” but it is bent to the point that the reader ends up inferring things that are false.
To me, in the case of their latest article, you could leave with the impression that Bostrom and MacAskill (as well as the entirety of EA) think the whole world should stop spending any money on philanthropy that helps anyone in the present (and, if it does, should help only the privileged). The uninformed reader can come away with the impression that EA doesn’t actually care about human lives. The way they write gives them credibility with the uninformed because it’s not an all-out attack where their intentions are obvious to the reader.
Whatever you want to call it, this does not seem like good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.
*This is a response to both of your comments.
Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves come to the fore rather than our reactive, childish selves.
In fact, I think I’ll reflect on this list for a long time to ensure I continue not to respond on Twitter!
This is definitely the case in Germany. The top 3 Google results for “longtermism” are all very negative posts, 2 of them by some of Germany’s biggest news magazines (ZEIT and Spiegel). As far as I know there is no positive content on longtermism in German.
I agree! This is part of what we’re trying to work on, by making good-quality pieces in favor of EA and longtermism easier to find.
Also, I doubt Torres is writing in bad faith exactly. “Bad faith” to me has connotations of saying things one knows to be untrue, whereas with Torres I’m sure he believes what he’s saying; he’s just angry about it, and anger biases.
Agreed.
My model is that he has a number of frustrations with EA. That on its own isn’t a big deal; there are plenty of valid, invalid, and arguable gripes with various aspects of EA.
But he also has a major bucket error where the concept of “far-right” is applied to a much bigger Category of bad stuff. Since some aspects of EA and longtermism seem to be X to him, and X goes in the Category, and stuff in the Category is far-right, EA must have far-right aspects. To inform people of the problem, he writes articles claiming EA and longtermism are far-right.
If EAs say his claims are factually false, he thinks the respondents are fooling themselves. After all, they’re ignoring his wider point that EA has stuff from the Category, in favor of nitpicky technicalities about his examples. He may even think they’re trying to motte-and-bailey people into thinking EA and longtermism can’t possibly have X. To me, it sounds like his narrative is now that he’s waging a PR battle against Bad Guys.
I’m not sure what the Category is, though.
At first I thought it was an entirely emotional thing: anything that makes him sufficiently angry, or a certain flavor of angry, or where he can’t verbalize why it makes him angry, is assumed to be far-right. But I don’t think that fits his actions. I don’t expect many people can decide “this makes me mad, so it’s full of white supremacy and other ills”, run a years-long vendetta on that basis, and still have a nuanced conversation about which parts aren’t bad.
Now I think X has a “shape”: with time and motivation, in a safe environment, Torres could give a consistent definition of what X is and isn’t. With more of the same, he could explain what it is and why he hates it without any reference to far-right stuff. Maybe he could even do an ELI5 of why X goes in the same Category as far-right stuff in the first place. But there’s not much chance of this actually happening, since it would require him being vulnerable with a mistrusted representative of the Bad Guys.
Yes, I’m always unsure of what “bad faith” really means. I often see it cited as a main reason to engage or not engage with an argument. But I don’t know why it should matter to me what a writer or journalist intends deep down. I would hope that “good faith” doesn’t just mean already being aligned on overall goals.
To be more specific, I keep seeing references to hidden context behind Phil Torres’s pieces. To someone who doesn’t have the time to read through many cryptic old threads, this makes me skeptical that the bad-faith charge is useful in deciding whether or not to discount an argument.
Have you ever had conversations where someone has misrepresented everything you’ve said or where they kept implying that you were a bad person every time you disagreed with them?
Equally, there’s an argument for thanking and replying to critical pieces about the EA community that honestly engage with the subject matter. This (now old) post criticizing longtermism is a good example: https://medium.com/curious/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982
I’m sure / really hope Will’s new book does engage with the points made here. And if so, it provides the rebuttal to those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.
Yup, I saw somebody on Medium speaking favorably about a Phil Torres piece as a footnote of his article on Ukraine (I responded here). And earlier I responded to Alice Crary’s piece. Right now the anti-EAs are often self-styled intellectual elites, but a chorus of bad faith could go mainstream at some point. (And then I hope you guys will see why I’m proposing an evidence clearinghouse, to help build a new and more efficient culture of good epistemics and better information… whether or not you think my idea would work as intended.)
I just posted a comment giving a couple of real-life anecdotes showing this effect.