I've seen "in bad faith" used in two ways:

1. This person's argument is based on a lie.

2. This person doesn't believe their own argument, but they aren't lying within the argument itself.

While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.

(See this comment for more.)
I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence, that is, evidence selected to demonstrate a particular conclusion rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.
In the case at hand, I think what's going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unrelated to the intellectual merits of those views) diligently went through most of the longtermist literature fishing for claims that would, if presented in isolation to a popular audience using technically true but highly tendentious or misleading language and/or stripped of the relevant context, cause serious damage to the longtermist movement. In light of this, I think it is not only naive but epistemically unjustified to insist that this person's findings be assessed on their merits alone. (Again, consider what your attitude would be if the claims originated with, e.g., an industry lobbyist.)
In addition, I think that it's inappropriate to publicize this person's writings, whether by including them in a syllabus or by reproducing their cherry-picked quotes. In the case of Nick Beckstead's quote, in particular, its reproduction seems especially egregious, because it helps promote an image of someone diametrically opposed to the truth: an early Giving What We Can member who pledged to donate 50% of his income to global poverty charities for the rest of his life is presented, on the basis of a single paragraph excerpted from a 180-page doctoral dissertation intended to be read primarily by an audience of professional analytic philosophers, as "support[ing] white supremacist ideology". Furthermore, even if Nick were just an ordinary guy rather than someone with impeccable cosmopolitan credentials, I think it would be perfectly appropriate to write what he did in the context of a thesis advancing the argument that our moral judgments are less reliable than is generally assumed. More generally, and more importantly, I believe that as EAs we should be willing to question established beliefs about the cost-effectiveness of any cause, even if this risks reaching very uncomfortable conclusions, as long as the questioning is done as part of a good-faith effort in cause prioritization and is subject to the usual caveats about possible reputational damage or the spreading of information hazards. It frightens me to think what our movement might become if it became an accepted norm that explorations of the sort exemplified by the quote can only be carried out "through a postcolonial lens"!
Note: Although I generally oppose disclaimers, I will add one here. I've known Nick Beckstead for a decade or so. We interacted a bit back when he was working at FHI, though after he moved to Open Phil in 2014 we had no further communication, other than exchanging greetings when he visited the CEA office around 2016 and corresponding briefly in a professional capacity. I am also an FTX Fellow, and, as I learned recently, Nick has been appointed CEO of the FTX Foundation. However, I made this same criticism ten months ago, well before I developed any ties to FTX (or had any expectation that I would develop such ties, or that Nick was being considered for a senior position). Here's what I wrote back then:
I personally do not think it is appropriate to include an essay in a syllabus, or to engage with it in a forum post, when (1) this essay characterizes the views it argues against using terms like "white supremacy", and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents, including eminently sensible and reasonable people such as Nick Beckstead and others, are white supremacists, and when (2) its author has shown repeatedly, in previous publications, social media posts and other behavior, that he is not writing in good faith and that he is unwilling to engage in honest discussion.
One reason is that the studies may consist of filtered evidence, that is, evidence selected to demonstrate a particular conclusion rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.
The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins.
However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")
Of course, there are many levels to what a "personal vendetta" might entail, and there are real trade-offs to whatever policy you establish. But I'm wary of taking the most extreme approach in any direction ("let's just ignore Phil entirely").
As for filtered evidence: definitely a concern if you're trying to weigh the totality of evidence for or against something, but not necessarily relevant if there's one specific piece of evidence that would be damning if true. For example, if Phil had produced a verifiable email exchange showing an EA leader threatening to fire a subordinate for writing something critical of longtermism, it wouldn't matter much to me how much that leader had done to encourage criticism in public.
I think it is not only naive but epistemically unjustified to insist that this person's findings be assessed on their merits alone.
I agree with this to the extent that those findings allow for degrees of freedom, so I'll be very skeptical of conversations reported third-hand or of cherry-picked quotes from papers, but still interested in leaked emails that seem like the genuine article.
In addition...
No major disagreements with anything past this point. I certainly wouldn't put Phil's white-supremacy work on a syllabus, though I could imagine excerpts of his criticism on other topics making it in: criticism of the type "this point of view implies this objection" rather than "this point of view implies that the person holding it is a dangerous lunatic".