Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")
There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.
I think you've misunderstood me. My complaint is not that these philosophers openly argue, "EAs are insufficiently Left, so be suspicious of them." (That's not what they say.) Rather, they presuppose Leftism's obviousness in a different way. They seem unaware that market liberals sincerely disagree with them about what's likely to have good results.
This leads them to engage in fallacious reasoning, like "EAs must be methodologically biased against systemic change, because why else would they not support anti-capitalist revolution?" I have literally never seen any proponent of the institutional critique acknowledge that some of us genuinely believe, for reasons, that anti-capitalist revolution is a bad idea. There is zero grappling with the possibility of disagreement about which "systemic changes" are good or bad. It's really bizarre. And I should stress that I'm not criticizing their politics here. I'm criticizing their reasoning. Their "evidence" of methodological bias is that we don't embrace their politics. That's terrible reasoning!
I don't think I'm methodologically biased against systemic change, and nothing I've read in these critiques gives me any reason to reconsider that judgment. It's weird to present as an "objection" something that gives one's target no reason to reconsider their view. That's not how philosophy normally works!
Now, you could develop some sort of argument about which claims are or are not "extraordinary", and whether the historical success of capitalism relative to anti-capitalism really makes no difference to what we should treat as "the default starting point." Those could be interesting arguments (if you anticipated and addressed the obvious objections)! I'm skeptical that they'd succeed, but I'd appreciate the intellectual engagement, and the possibility of learning something from it. Existing proponents of the institutional critique have not done any of that work (from what I've read to date). And they're philosophers: it's their job to make reasoned arguments that engage with the perspectives of those they disagree with.
How does writing a substantive post on x-risk give Thorstad a free pass to cast aspersions when he turns to discussing politics or economics?
I'm criticizing specific content here. I don't know who you are or what your grievances are, and I'd ask you not to project them onto my specific criticisms of Thorstad and Crary et al.
Thorstad acknowledged that many of us have engaged in depth with the critique he references, but instead of treating our responses as worth considering, he suggests it is "worth considering if the social and financial position of effective altruists might have something to do with" the conclusions we reach.
It is hardly "mud-slinging" for me to find this slimy dismissal objectionable. Nor is it mud-slinging to point out ways in which Crary et al (cited approvingly by Thorstad) are clearly being unprincipled in their appeals to "systemic change". This is specific, textually-grounded criticism of specific actors, none of whom are you.
I think this point is extremely revealing:
The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether).
See also Crary et al.'s lament that EA funders prioritize transformative alt-meat research and corporate campaigns over sanctuaries for individual rescued animals. They are clearly not principled advocates for systemic change over piecemeal interventions. Rather, I take these examples to show that their criticisms are entirely opportunistic. (As I previously argued on my blog, the best available evidence, especially taking into account their self-reported motivation for writing the anti-EA book, suggests that these authors want funding for their friends and political allies, and don't want it to have to pass any kind of evaluation for cost-effectiveness relative to competing uses of the available funds. It's all quite transparent, and I don't understand why people insist on pretending that these hacks have intellectual merit.)
That's certainly possible! I just find it incredibly frustrating that these criticisms are always written in a way that fails to acknowledge that some of us might just genuinely disagree with the critics' preferred politics, and that we could have reasonable and principled grounds for doing so, which are worth engaging with.
As a methodological principle, I think one should argue the first-order issues before accusing one's interlocutors of bias. Fans of the institutional critique too often skip that crucial first step.
Thorstad writes:
I think that the difficulty which philanthropists have in critiquing the systems that create and sustain them may explain much of the difficulty in conversations around what is often called the institutional critique of effective altruism.
The main difficulty I have with these "conversations" is that I haven't actually seen a substantive critique, containing anything recognizable as an argument. Critics don't say: "We should institute systemic policies X, Y, Z, and here's the supporting evidence why." Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever) must be in need of psychologizing.
So consider that as an alternative hypothesis: the dialectic around the "institutional critique" is "difficult" (unproductive?) because it consists in critics psychologizing EAs rather than trying to persuade us with arguments.
Although effective altruists did engage in detail with the institutional critique, much of the response was decidedly unsympathetic. It is worth considering if the social and financial position of effective altruists might have something to do with this reaction – not because effective altruists are greedy (they are not), but because most of us find it hard to think ill of the institutions that raised us up.
This exemplifies the sort of engagement that I find unproductive. Rather than psychologizing those he disagrees with, I would much prefer to see Thorstad attempt to offer a persuasive first-order argument for some specific alternative cause prioritization (that diverges from the EA conventional wisdom). I think that would obviously be far more "worth considering" than convenient psychological stories that function to justify dismissing perspectives different from his own.
I think the latter is outright bad and detracts from reasoned discourse.
Thanks for the feedback! It's probably helpful to read this in conjunction with "Good Judgment with Numbers", because the latter post gives a fuller picture of my view, whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.
(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this very different "steelman" position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it! I'm arguing against the specific position specified in the post, i.e. the claim that different kinds of values literally can't, even in principle, be quantified.)
I think this is confusing two forms of "extreme".
I'm actually trying to suggest that my interlocutor has confused these two things. There's what's conventional vs socially extreme, and there's what's epistemically extreme, and they aren't the same thing. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.
Yeah, I agree that one also shouldn't blindly trust numbers (and discounting for lack of robustness of supporting evidence is one reasonable way to implement that). I take that to be importantly different from, and much more reasonable than, the sort of "in principle" objection to quantification that this post addresses.
Refusing to Quantify is Refusing to Think (about trade-offs)
When is Philosophy Worth Funding?
I think there could be ways of doing both. But yeah, I think the core idea of "it's good to actively help people, and helping more is better than helping less" should be a core component of civic virtue that's taught as plain commonsense wisdom alongside "racism is bad", etc.
Definitional clarity can be helpful if you think that people might otherwise be talking past each other (using the same word to mean something importantly different without realizing it). But otherwise, I generally agree with your take. (It's a classic failure-mode of analytic philosophers that some pretend not to know what a word means until it has been precisely defined. It's quite silly.)
Eh, I'm with Aristotle on this one: it's better to start early with moral education. If anything, I think EA leaves it too late. We should be thinking about how to encourage the virtues of scope-sensitive beneficentrism (obviously not using those terms!) starting in early childhood.
(Or, rather, since most actual EAs aren't qualified to do this, we should hope to win over some early childhood educators who would be competent to do this!)
Counting Costless Beneficence
I mean, it's undeniable that the best thing is best. It's not like there's some (coherent) alternative view that denies this. So I take it the real question is how much pressure one should feel towards doing the impartial best (at the cost of significant self-sacrifice); whether the maximum should be viewed as the baseline for minimal acceptability, such that anything short of it constitutes failure, or whether we should rather aim to normalize something more modest and simply celebrate further good beyond that point as an extra bonus.
I can see pathologies in both directions here. I don't think it makes sense to treat perfection as the baseline, such that any realistic outcome automatically qualifies as failure. For anyone to think that way would seem quite confused. (Which is not to deny that it can happen.) But also, it would seem a bit pathological to refuse to celebrate moral saints? Like, obviously there is something very impressive about moral heroism and extreme altruism that goes beyond what I personally would be willing to sacrifice for others? I think the crucial thing is just to frame it positively rather than negatively, and not to get confused about where the baseline or zero-point properly lies.
What do you mean by "maximization"? I think it's important to distinguish between:
(1) Hegemonic maximization: the (humanly infeasible) idea that every decision in your life should aim to do the most impartial good possible.
(2) Maximizing within specific decision contexts: insofar as you're trying to allocate your charity budget (or altruistic efforts more generally), you should try to get the most bang for your buck.
As I understand it, EA aims to be maximizing in the second sense only. (Hence the norm around donating 10%, not some incredibly demanding standard.)
On the broader themes, a lot of what you're pointing to concerns potential conflicts between ethics and self-interest, and I think it's pretty messed up to use the language of psychological "health" to justify a wanton disregard for ethics. Maybe it's partly a cultural clash, and when you say things like "All perspectives are valid," you really mean them in a non-literal sense?
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who, unlike me, are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to what I tend to see as the higher priority: drawing attention to apt EA criticisms of ordinary moral thought and behavior, and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
QB: How Much do Future Generations Matter?
Sounds like a good move! In my experience (both as an author and a reader), Substack is very simple and convenient, and the network effects (e.g. obtaining new readers via Substack's "recommendations" feature) are much larger than I would have predicted in advance.
One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.