I agree with this, and would add that the appropriate response to arguments made in bad faith is not to “steelman” them (or to add them to a syllabus, or to keep disseminating a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.
I’ve seen “in bad faith” used in two ways:

1. This person’s argument is based on a lie.
2. This person doesn’t believe their own argument, but they aren’t lying within the argument itself.

While it’s obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument’s original promoter not believing it isn’t a reason for no one to believe it, and shouldn’t stop us from engaging with arguments that aren’t obviously false.

(See this comment for more.)
I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn’t cause cancer, I think it’s reasonable to be skeptical even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.
In the case at hand, I think what’s going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unrelated to the intellectual merits of those views) diligently went through most of the longtermist literature fishing for claims that would, if presented in isolation to a popular audience using technically true but highly tendentious or misleading language and/or stripped of the relevant context, cause serious damage to the longtermist movement. In light of this, I think it is not only naive but epistemically unjustified to insist that this person’s findings be assessed on their merits alone. (Again, consider what your attitude would be if the claims originated with, e.g., an industry lobbyist.)
In addition, I think that it’s inappropriate to publicize this person’s writings, by including them in a syllabus or by reproducing their cherry-picked quotes. In the case of Nick Beckstead’s quote, in particular, its reproduction seems especially egregious, because it helps promote an image of someone diametrically opposed to the truth: an early Giving What We Can member who pledged to donate 50% of his income to global poverty charities for the rest of his life is presented—from a single paragraph excerpted from a 180-page doctoral dissertation intended to be read primarily by an audience of professional analytic philosophers—as “support[ing] white supremacist ideology”. Furthermore, even if Nick were just an ordinary guy rather than having impeccable cosmopolitan credentials, I think it would be perfectly appropriate to write what he did in the context of a thesis advancing the argument that our moral judgments are less reliable than is generally assumed. More generally, and more importantly, I believe that as EAs we should be willing to question established beliefs related to the cost-effectiveness of any cause, even if this risks reaching very uncomfortable conclusions, as long as the questioning is done as part of a good-faith effort in cause-prioritization and subject to the usual caveats related to possible reputational damage or the spreading of information hazards. It frightens me to think what our movement might become if it were to become an accepted norm that explorations of the sort exemplified by the quote can only be carried out “through a postcolonial lens”!
Note: Although I generally oppose disclaimers, I will add one here. I’ve known Nick Beckstead for a decade or so. We interacted a bit back when he was working at FHI, though after he moved to Open Phil in 2014 we had no further communication, other than exchanging greetings when he visited the CEA office around 2016 and corresponding briefly in a professional capacity. I am also an FTX Fellow, and as I learned recently, Nick has been appointed CEO of the FTX Foundation. However, I made this same criticism ten months ago, way before I developed any ties to FTX (or had any expectations that I would develop such ties or that Nick was being considered for a senior position). Here’s what I wrote back then:
I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like ‘white supremacy’ and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others—are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.
One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.
The “incentives” point is reasonable, and it’s part of the reason I’d want to deprioritize checking into claims with dishonest origins.
However, I’ll note that establishing a rule like “we won’t look at claims seriously if the person making them has a personal vendetta against us” could lead to people trying to argue against examining someone’s claims by arguing that they have a personal vendetta, which gets weird and messy. (“This person told me they were sad after org X rejected their job application, so I’m not going to take their argument against org X’s work very seriously.”)
Of course, there are many levels to what a “personal vendetta” might entail, and there are real trade-offs to whatever policy you establish. But I’m wary of taking the most extreme approach in any direction (“let’s just ignore Phil entirely”).
As for filtered evidence — definitely a concern if you’re trying to weigh the totality of evidence for or against something. But not necessarily relevant if there’s one specific piece of evidence that would be damning if true. For example, if Phil had produced a verifiable email exchange showing an EA leader threatening to fire a subordinate for writing something critical of longtermism, it wouldn’t matter much to me how much that leader had done to encourage criticism in public.
I think it is not only naive but epistemically unjustified to insist that this person’s findings be assessed on their merits alone.
I agree with this to the extent that those findings allow for degrees of freedom — so I’ll be very skeptical of conversations reported third-hand or cherry-picked quotes from papers, but still interested in leaked emails that seem like the genuine article.
In addition...
No major disagreements with anything past this point. I certainly wouldn’t put Phil’s white-supremacy work on a syllabus, though I could imagine excerpts of his criticism on other topics making it in — of the type “this point of view implies this objection” rather than “this point of view implies that the person holding it is a dangerous lunatic”.
Agree with this.