Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog. 10% Pledge #54 with GivingWhatWeCan.org.
Facing up to the Price on Life
See the Theories of Well-being chapter at utilitarianism.net for a detailed philosophical overview of this topic.
The simple case against hedonism is just that it is bizarrely restrictive: many of us have non-hedonistic ultimate desires about our own lives that seem perfectly reasonable, so the burden is on the hedonist to establish that they know better than we do what is good for us, and, in particular, that our subjective feelings are the only things that could reasonably be taken to matter for our own sakes. That's an extremely (and I would say implausibly) restrictive claim.
How does averting a birth cause an extra child to be born somewhere else?
Just sharing my 2024 Year in Review post from Good Thoughts. It summarizes a couple dozen posts in applied ethics and ethical theory (including issues relating to naive instrumentalism and what I call "non-ideal decision theory") that would likely be of interest to many forum readers. (Plus a few more specialist philosophy posts that may only appeal to a more niche audience.)
Fair enough; I think I agree with that. Something I discuss a lot in my writing is that we clearly have strong moral reasons to do more good rather than less, but that an over-emphasis on "obligation" and "demands" can get in the way of people appreciating this. I think I'm basically channeling the same frustration that you have, but rather than denying that there is such a thing as "supererogation", I would frame it as emphasizing that we obviously have really good reasons to do supererogatory things, and that refusing to do so can even be a straightforward normative error. See, especially, What Permissibility Could Be, where I emphatically reject the "rationalist" conception of permissibility on which we have no more reason to do supererogatory acts than selfish ones.
I basically agree with Scott. You need to ask what it even means to call something "obligatory". For many utilitarians (from Sidgwick to Peter Singer), it means nothing more than what you have most reason to do. But that is not what anyone else means by the term, which (as J.S. Mill better recognized) has important connections to blameworthiness. So then the question arises why you would think that anything less than perfection was automatically deserving of blame. You might just as well claim that anything better than maximal evil is thereby deserving of praise!
For related discussion, see my posts:
Deontic Pluralism (on different things that "ought" and "obligation" can mean)
Imperfection is OK! (on how to think about our moral imperfection, and why we needn't feel bad about it, unless we do something far more egregious than merely being less than perfect)
And for a systematic exploration of demandingness and its limits (published in a top academic journal), see:
I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints).
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism: either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would pose a PR problem?
OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient to warrant extra care about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving.
(Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)
I basically agree with the core case for "animal welfare offsetting", and discuss some related ideas in Confessions of a Cheeseburger Ethicist. The main points of resistance I'd flag are just:
For some people, there may not be any "tradeoff" between going vegan and donating. If you can easily do both, all the better! So it's worth being clear that the argument isn't really against veganism so much as against investing too much moral effort into becoming vegan (if it would require significant willpower).
As Jeff notes, there may be better second-order effects from going vegan. Presumably a sufficiently large extra donation could balance those out, but it's very hard to guess what would be sufficient. (I do think there's also value to omnivores like us being public about valuing animal welfare and donating accordingly. That might help reach some different audiences, for example. But I still think it's worth esteeming veganism as a laudable practice, even if it shouldn't be everyone's top priority.)
Or if any other kind of progress (including moral progress, some of which will come from future people) will eventually abolish factory-farming. I'd be utterly shocked if factory-farming is still a thing 1000+ years from now. But sure, it is a possibility, so you could discount the value of new lives by some modest amount to reflect this risk. I just don't think that will yield the result that marginal population increases are net-negative for the world in expectation.
In the long term, we will hopefully invent forms of delicious meat, like cultured meat, that do not involve sentient animal suffering… When that happens, pro-natalism might make more sense.
As Kevin Kuruc argues, progress comes from people (or productive person-years), not from the bare passage of time. So we should expect that some number of productive person-years is required to solve this problem. As a first-pass model: removing person-years from the present doesn't reduce the number of animals harmed before a solution is found; it just makes the solution arrive later. On this model, there simply is no meat-eater problem.
One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.
Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")
There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.
I think you've misunderstood me. My complaint is not that these philosophers openly argue, "EAs are insufficiently Left, so be suspicious of them." (That's not what they say.) Rather, they presuppose Leftism's obviousness in a different way. They seem unaware that market liberals sincerely disagree with them about what's likely to have good results.
This leads them to engage in fallacious reasoning, like "EAs must be methodologically biased against systemic change, because why else would they not support anti-capitalist revolution?" I have literally never seen any proponent of the institutional critique acknowledge that some of us genuinely believe, for reasons, that anti-capitalist revolution is a bad idea. There is zero grappling with the possibility of disagreement about which "systemic changes" are good or bad. It's really bizarre. And I should stress that I'm not criticizing their politics here. I'm criticizing their reasoning. Their "evidence" of methodological bias is that we don't embrace their politics. That's terrible reasoning!
I don't think I'm methodologically biased against systemic change, and nothing I've read in these critiques gives me any reason to reconsider that judgment. It's weird to present as an "objection" something that gives one's target no reason to reconsider their view. That's not how philosophy normally works!
Now, you could develop some sort of argument about which claims are or are not "extraordinary", and whether the historical success of capitalism relative to anti-capitalism really makes no difference to what we should treat as "the default starting point." Those could be interesting arguments (if you anticipated and addressed the obvious objections)! I'm skeptical that they'd succeed, but I'd appreciate the intellectual engagement, and the possibility of learning something from it. Existing proponents of the institutional critique have not done any of that work (from what I've read to date). And they're philosophers; it's their job to make reasoned arguments that engage with the perspectives of those they disagree with.
How does writing a substantive post on x-risk give Thorstad a free pass to cast aspersions when he turns to discussing politics or economics?
I'm criticizing specific content here. I don't know who you are or what your grievances are, and I'd ask you not to project them onto my specific criticisms of Thorstad and Crary et al.
Thorstad acknowledged that many of us have engaged in depth with the critique he references, but instead of treating our responses as worth considering, he suggests it is "worth considering if the social and financial position of effective altruists might have something to do with" the conclusions we reach.
It is hardly "mud-slinging" for me to find this slimy dismissal objectionable. Nor is it mud-slinging to point out ways in which Crary et al (cited approvingly by Thorstad) are clearly being unprincipled in their appeals to "systemic change". This is specific, textually-grounded criticism of specific actors, none of whom are you.
I think this point is extremely revealing:
The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether).
See also Crary et al.'s lament that EA funders prioritize transformative alt-meat research and corporate campaigns over sanctuaries for individual rescued animals. They are clearly not principled advocates for systemic change over piecemeal interventions. Rather, I take these examples to show that their criticisms are entirely opportunistic. (As I previously argued on my blog, the best available evidence, especially taking into account their self-reported motivation for writing the anti-EA book, suggests that these authors want funding for their friends and political allies, and don't want it to have to pass any kind of evaluation for cost-effectiveness relative to competing uses of the available funds. It's all quite transparent, and I don't understand why people insist on pretending that these hacks have intellectual merit.)
That's certainly possible! I just find it incredibly frustrating that these criticisms are always written in a way that fails to acknowledge that some of us might just genuinely disagree with the critics' preferred politics, and that we could have reasonable and principled grounds for doing so, which are worth engaging with.
As a methodological principle, I think one should argue the first-order issues before accusing one's interlocutors of bias. Fans of the institutional critique too often skip that crucial first step.
Thorstad writes:
I think that the difficulty which philanthropists have in critiquing the systems that create and sustain them may explain much of the difficulty in conversations around what is often called the institutional critique of effective altruism.
The main difficulty I have with these "conversations" is that I haven't actually seen a substantive critique, containing anything recognizable as an argument. Critics don't say: "We should institute systemic policies X, Y, Z, and here's the supporting evidence why." Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever) must be in need of psychologizing.
So consider that as an alternative hypothesis: the dialectic around the "institutional critique" is "difficult" (unproductive?) because it consists in critics psychologizing EAs rather than trying to persuade us with arguments.
Although effective altruists did engage in detail with the institutional critique, much of the response was decidedly unsympathetic. It is worth considering if the social and financial position of effective altruists might have something to do with this reaction – not because effective altruists are greedy (they are not), but because most of us find it hard to think ill of the institutions that raised us up.
This exemplifies the sort of engagement that I find unproductive. Rather than psychologizing those he disagrees with, I would much prefer to see Thorstad attempt to offer a persuasive first-order argument for some specific alternative cause prioritization (one that diverges from the EA conventional wisdom). I think that would obviously be far more "worth considering" than convenient psychological stories that function to justify dismissing perspectives different from his own.
I think the latter is outright bad and detracts from reasoned discourse.
Thanks for the feedback! It's probably helpful to read this in conjunction with "Good Judgment with Numbers", because the latter post gives a fuller picture of my view, whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.
(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this very different "steelman" position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it! I'm arguing against the specific position specified in the post, i.e. holding that different kinds of values can't (literally can't, in principle) be quantified.)
I think this is confusing two forms of "extreme".
I'm actually trying to suggest that my interlocutor has confused these two things. There's what's conventional vs socially extreme, and there's what's epistemically extreme, and they aren't the same thing. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.
Yeah, I agree that one also shouldn't blindly trust numbers (and discounting for lack of robustness of supporting evidence is one reasonable way to implement that). I take that to be importantly different from, and much more reasonable than, the sort of "in principle" objection to quantification that this post addresses.
I'm open to the possibility that what's all things considered best might take into account other kinds of values beyond traditionally welfarist ones (e.g. Nietzschean perfectionism). But standard sorts of agent-relative reasons like the ones Wolf adverts to (reasons to want your life in particular to be more well-rounded) strike me as valid excuses rather than valid justifications. It isn't really a better decision to do the more selfish thing, IMO.
Your second paragraph is hard to answer because different people have different moral beliefs, and (as I suggest in the OP) laxer moral beliefs often stem from motivated reasoning. So the two may be intertwined. But obviously my hope is that greater clarity of moral knowledge may help us to do more good even with limited moral motivation.