Why do you think it’s too nihilistic and divorced from humane values to be worth taking seriously? Surely those sympathetic to it, myself included, don’t agree.
By prioritizing suffering and those badly off, the procreation asymmetry (often a person-affecting view) and suffering-focused views generally are more humane than alternatives, almost by definition of the word ‘humane’ (although possibly depending on the precise definition).
Furthermore, the questions asked assume the answer (that the universe is good at all) or are rhetorical (“How could that not be better than a barren rock?”) without offering any actual arguments. Perhaps you had arguments earlier in the article in mind that you don’t refer to explicitly there? EDIT: See Richard’s response.
Although the section does point out reasonable concerns someone might have with person-affecting views, it doesn’t treat the views fairly. That quote is an important example of this. There’s also the way totalism is defended against the Repugnant Conclusion, so much that it gets a whole section with defenses against the RC, but no further defenses are given for person-affecting views in response to objections, only versions that are weaker and weaker, before finally rejecting them all with that quote, which almost begs the question, poses a rhetorical question and is very derisive.
Another major objection against total utilitarianism that’s missing is replacement/replaceability, the idea that, relative to the status quo, it would be better to kill individuals (or everyone) to replace them with better off individuals.
Richard, I do agree that the “indifferent to making happy people” view can lead to that sort of conclusion, which does indeed sound nihilistic. But I find it hard to find good arguments against it. I don’t find it obvious to say that a situation where there’s beings who are experiencing something is better than a situation where there’s no beings to experience anything at all. Reason 1) is that no one suffers from that absence of experience, reason 2) is that at least this also guarantees that there’s no horrible suffering. This might be very counterintuitive to some (or many) but I also feel that as soon as there is one creature suffering horribly for a prolonged amount of time, it might maybe be better to have nothing at all (see e.g. Omelas: do we want that world or would we rather have nothing at all?)
Hi Tobias, thanks for this. I’m curious: can you find “good arguments” against full-blown nihilism? I think nihilism is very difficult to argue against, except by pointing to the bedrock moral convictions it is incompatible with. So that’s really all I’m trying to do here. (See also my reply to Michael.)
I don’t find it obvious to say that a situation where there’s beings who are experiencing something is better than a situation where there’s no beings to experience anything at all.
Just to clarify: it depends on the experiences (and more, since I’m not a hedonist). Some lives would be worse than nothing at all. But yeah, if you just don’t share the intuition that utopia is better than a barren rock then I don’t expect anything else I have to say here will be very persuasive to you.
Reason 1) is that no one suffers from that absence of experience
But isn’t that presupposing the suffering is all that matters? I’m trying to pump the intuition that good things matter too.
2) is that at least this also guarantees that there’s no horrible suffering.
Yep, I’ll grant that: horrible suffering is really, really bad, so there’s at least that to be said for the barren rock. :-)
Nihilists claim that nothing is of value. The view I’m addressing holds that nothing is of positive value: utopia is no better than a barren rock. I find that objectionably nihilistic. (Though, in at least recognizing the problem of negative value, it isn’t as bad as full-blown nihilism.)
Furthermore, the questions asked assume the answer (that the universe is good at all) or are rhetorical (“How could that not be better than a barren rock?”) without offering any actual arguments.
I’m trying to explain that I take as a premise that some things have positive value, and that utopia is better than a barren rock. (If you reject that premise, I have nothing more to say to you—any more than I could argue with someone who insisted that pain was intrinsically good. No offense intended; it’s simply a dialectical impasse.)
To make the argument pedantically explicit:
(P1) Utopia is better than a barren rock.
(P2) Person-affecting views (of the sort under discussion) imply otherwise.
Therefore, (C) Person-affecting views (of the sort under discussion) are false.
Is this “question-begging”? No more than any putative counterexample ever is. Of course, the logic of counterexamples is such that they can only ever be persuasive to those who haven’t already appreciated that the putative counterexample is an implication of the targeted view. If you already accept the implication, then you won’t be persuaded. But the argument may nonetheless be rationally persuasive for those who (perhaps like the OP?) are initially drawn to person-affecting views, but hadn’t considered this implication. Upon considering it, they may find that they share my view that the implication (rejecting P1) is unacceptable.
it doesn’t treat the views fairly. That quote is an important example of this.
Surely those sympathetic to the expressed objections, myself included, don’t agree.
Utilitarianism.net isn’t wikipedia, striving for NPOV. You may not like our point of view, but having a point of view (and spending more time defending it than defending opposing views) does not mean that one has failed to treat the opposing views fairly. (Philosophers can disagree with each other without accusing each other of unfairness or other intellectual vices.)
FWIW, I thought the proposal to incorporate “value blur” to avoid the simple objections was a pretty neat (and, afaik, novel?) sympathetic suggestion we offer on behalf of the person-affecting theorist. But yes, we do go on to suggest that the core view remains unacceptable. That’s a substantive normative claim we’re making. The fact that others may disagree with the claim doesn’t automatically make it “unfair”.
You’re welcome to disagree! But I would hope that you can appreciate that we should also be free to disagree with you, including about the question of which moral views are plausible candidates to take seriously (i.e. as potentially correct) and which are not.
Fair with respect to it being a proposed counterexample. I’ve edited my reply above accordingly.
“it doesn’t treat the views fairly. That quote is an important example of this.”
Surely those sympathetic to the expressed objections, myself included, don’t agree.
Utilitarianism.net isn’t wikipedia, striving for NPOV. You may not like our point of view, but having a point of view (and spending more time defending it than defending opposing views) does not mean that one has failed to treat the opposing views fairly.
(...)
You’re welcome to disagree! But I would hope that you can appreciate that we should also be free to disagree with you, including about the question of which moral views are plausible candidates to take seriously (i.e. as potentially correct) and which are not.
I have multiple complaints where I think the article is unfair or misleading, and they’re not just a matter of having disagreements with specific claims.
First, the article often fails to mark when something is opinion, giving the misleading impression of fact and objectivity. I quote examples below.
Second, I think we should hold ourselves to higher standards than using contemptuous language to refer to views or intuitions ethicists and thoughtful people find plausible or endorse, and I don’t think it’s fair to otherwise just call the views implausible or not worth taking seriously without marking this very explicitly as opinion (“arguably” isn’t enough, in my view, and I’d instead recommend explicitly referring to the authors, e.g. use “We think (...)”).
Third, I think being fair should require including the same kinds of arguments on each side, when available, and also noting when these arguments “prove too much” or otherwise undermine the views the article defends, if they do. Some of the kinds of arguments used to defend the total view against the Repugnant Conclusion can be used against intuitions supporting the total view or intuitions against person-affecting views (tolerating and debunking, as mentioned above, and attacking the alternatives, which the article does indeed do for alternatives to PA views).
Expanding on this third point, “How could that not be better than a barren rock?” has an obvious answer that was left out: person-affecting views (or equivalently, reasons implying person-affecting views) could be correct (or correct to a particular person, without stance-independence). This omission and the contemptuous dismissal of the person-affecting intuition for this case that follows seem intended to rule out tolerating the intuition and debunking the intuition, moves the article itself uses to defend the total view from the Repugnant Conclusion as an objection. The article also makes no attempt at either argument, when it’s not hard to come up with such arguments. This seems to me to be applying a double standard for argument inclusion.
One of the debunking arguments made undermines the veil of ignorance argument, which literally asks you to imagine yourself as part of the population, and is one of the three main arguments for utilitarianism on the introductory page:
Third, we may mistakenly imagine ourselves as part of the populations being compared in the repugnant conclusion. Consequently, an egoistic bias may push us to favor populations with a high quality of life.
I’d also guess it’s pretty easy to generate debunking arguments against specific intuitions, and I can propose a few specifically against adding lives ever being good in itself. Debunking arguments have also been used against moral realism generally, so they might “prove too much” (although I think stance-independent moral realism is actually false, anyway).
The article also criticizes the use of the word ‘repugnant’ in the name of the Repugnant Conclusion for being “rhetorically overblown” in the main text (as well as ‘sadistic’ in “Sadistic Conclusion” for being “misleading”/“a misnomer”, but only in a footnote), but then goes on to use similarly contemptuous and dismissive language against specific views (emphasis mine):
The procreative asymmetry also has several deeply problematic implications, stemming from its failure to consider positive lives to be a good thing.
(This is also a matter of opinion, and not marked as such.)
Most people would prefer world A over an empty world B. But the simple procreative asymmetry would seem, perversely, to favor the empty world B since it counts the many good lives in world A for nothing while the few bad lives dominate the decision.
Granted, the immense incomparability introduced by all the putatively “meh” lives in A at least blocks the perverse conclusion that we must outright prefer the empty world B. Even so, holding the two worlds to be incomparable or “on a par” also seems wrong.
(This is also a matter of opinion, and not marked as such.)
Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.
Again, I also think “divorced from humane values” is plainly false under some common definitions of ‘humane’. As I use that word, mostly as a synonym for ‘compassionate’, ensuring happy people are born has nothing to do with being humane, while prioritizing suffering and the badly off, as practically implied by the procreation asymmetry, is more humane than not.
There are other normative claims made without any language to suggest that they’re opinions at all (again, emphasis mine):
The simplest such view holds that positive lives make no difference in value to the outcome. But this falsely implies that creating lives with low positive welfare is just as good as creating an equal number of lives at a high welfare level.
(...)
Clearly, we should prefer world A1 over A2
I doubt there are decisive proofs for these claims.
Another (again, emphasis mine):
Others might be drawn to a weaker (and correspondingly more plausible) version of the asymmetry, according to which we do have some reason to create flourishing lives, but stronger reason to help existing people or to avoid lives of negative wellbeing.
This gives me the impression that the author(s) didn’t properly entertain person-affecting views or really consider objections to the weaker versions that don’t apply to the stronger ones or alternatives (other than the original reasons given for person-affecting views). The weaker versions seem to me to be self-undermining, have to draw more arbitrary lines, and are supported only by direct intuitions about cases (at least in the article) over the stronger versions, not more general reasons:
On self-undermining, the reasons people give for holding person-affecting intuitions in the first place have to be defeated when lives are good enough, and the view would not really be person-affecting anymore, including according to the article’s definition (“Person-affecting views that deny we have (non-instrumental) reason to add happy lives to the world.”). Why wouldn’t “meh” lives be good enough, too?
On arbitrariness, how do you define a “flourishing life” and where do you draw the line (or precisely how the blur is graded)? Will this view end up having to define it in an individual-specific (or species-specific) way, or otherwise discount some individuals and species for having their maximums too low? Something else?
As far as I can tell, the only arguments given for the weaker versions are intuitions about cases. Intuitions about cases should be weighed against more general reasons like those given in actualist arguments and Frick’s conditional reasons.
The value blur proposal was interesting and seems to me worth writing up somewhere, but it’s unlikely to represent anyone’s (or any ethicist’s) actual views, and those sympathetic to person-affecting views might not endorse it even if they knew of it. The article also has a footnote that undermines the view itself (intentionally or not), but there are views that I think meet this challenge, so the value blur view risks being a strawman rather than the steelman that might have been intended:
A major challenge for such a view would be to explain how to render this value blur compatible with the asymmetry, so that miserable lives are appropriately recognized as bad (not merely meh).
It would make more sense to me to focus on the asymmetric person-affecting views ethicists actually defend/endorse or that otherwise already appear in the literature. (Personally, I think in actualist and/or conditional reason terms, and I’m most sympathetic to negative utilitarianism (not technically PA), actualist asymmetric person-affecting views, and the views in Thomas, 2019, but Thomas, 2019 seems too new and obscure to me to be the focus of the article, too.)
I agree with some of these points. I am very often bothered by overuse of the charge of nihilism in general, and in this case, if it comes down to “you don’t literally care about nothing, but there is something that seems to us worth caring about that you don’t”, then this seems especially misleading. A huge amount of what we think of as moral progress comes from not caring anymore about things we used to; for instance, couldn’t an old-fashioned racist accuse modern sensibilities of being nihilistic philistines with respect to racial special obligations? I am somewhat satisfied by Chappell’s response here that what is uniquely being called out is views on which nothing is of positive value, which is a more unique use of the charge and less worrying.
I also agree that the piece would have been more hygienic if it discussed parallel problems with its own views and parallel defenses of others more, though in the interest of space it might have instead linked to some pieces making these points or flagged that such points had been made elsewhere instead.
However, all of this being said, your comment bothers me. The standard you are holding this piece to is one that I think just about every major work of analytic ethics of the last century would have failed. The idea that this piece points to some debunking arguments but other debunking arguments can be made against views it likes is I think true of literally every work of ethics that has ever made a debunking argument. It is also true of lots of very standard arguments, like any that points to counter-intuitive implications of a view being criticized.
Likewise, the idea that offhand uses of words like “problematic” or “perverse” to describe different arguments/implications are too charged not to be marked explicitly as matters of opinion… I mean, at least some pieces of ethical writing don’t use debunking arguments at all, but this point in particular seems to go way too far. Not just because it is asking ethics to entirely change its style in order to tip-toe around the author’s real emotions, but also because these emotions seem essential to the project itself to me.
Ethics papers do a variety of things, in particular they highlight distinctions, implications, and other things that might allow the reader to see a theory more clearly, but unless you are an extremely strict realist (and even realists like Parfit regularly break this rule) they are also to an extent an exercise in rhetoric. In particular they try to give the reader a sense of what it feels like from the inside to believe what they believe, and I think this is important and analytic philosophy will have gone too far when it decides that this part of the project simply doesn’t matter.
I’m sorry if I’m sounding somewhat charged here, again, I agree with many of your points and think you mean well here, but I’ve become especially allergic to this type of motte and bailey recently, and I’m worried that the way this comment is written verges on it.
Fair with respect to nihilism in particular. I can see the cases both for and against that charge against the procreation asymmetry. EDIT: although the word has fairly negative connotations, so I still think it’s better not to use it in this context.
With respect to fairness, I think given the way the website is used and marketed, i.e., as an introductory textbook to be shared more widely with audiences not yet very familiar with the area, it’ll mislead readers new to the area or who otherwise don’t take the time to read it more carefully and critically. It’s even referenced in the EA Forum tag/wiki for Utilitarianism, along with a podcast*, in the section External links (although there are other references in Further reading), and described there as a textbook, too. I’m guessing EA groups will sometimes share it with their members. It might be used in actual courses, as seems intended. If I were to include it in EA materials or university courses, I’d also include exercises asking readers to spot where parallel arguments could have been used but weren’t and try to come up with them, as well as about other issues, and have them read opposing pieces. We shouldn’t normally have to do this for something described as or intended to be treated as a textbook.
Within an actual university philosophy class, maybe this is all fine, since other materials and critical reading will normally be expected (or, I’d hope so). But that still leaves promotion within EA, where this might not happen. The page tries to steer the audience towards the total view and longtermism, so it could shape our community while misleading uncritical readers through unfairly treating other views. To be clear, though, I don’t know how and how much it is being or will be promoted within the community. Maybe these concerns are overblown.
On the other hand, academics are trained to see through these issues, and papers are read primarily by smaller and more critical audiences, so the risks of misleading are lower. So it seems reasonable to me to hold it to a higher standard than an academic paper.
* Bold part edited in after. I missed the podcast when I first looked. EDIT: I’ve also just added https://www.utilitarianism.com and some other standard references to that page.
I’m of two minds on this. On the one hand you’re right that a textbook style should be more referential and less polemical as a rule. On the other hand, as you also point out, pretty much every philosophy class I’ve ever taken is made entirely of primary source readings. In the rare cases where something more referential is assigned instead, generally it’s just something like a Stanford Encyclopedia of Philosophy entry. I’m not certain how all introductory EA fellowships are run, but the one I facilitated was also mostly primary, semi-polemical sources, defending a particular perspective, followed by discussion, much like a philosophy class. Maybe utilitarianism.net is aiming more for being a textbook on utilitarianism, but it seems to me like it is more of a set of standard arguments for the classical utilitarian perspective, with a pretty clear bias in favor of it. That also seems more consistent with what Chappell has been saying, though of course it’s possible that its framing doesn’t reflect this sufficiently as well. Like you though, I’m not super familiar with how this resource is generally used, I just don’t know that I would think of it first and foremost as a sort of neutral secondary reference. That just doesn’t seem like its purpose.
Also, another difference with academic papers is that they’re often upfront about their intentions to defend a particular position, so readers don’t get the impression that a paper gives a balanced or fair treatment of the relevant issues. Utilitarianism.net is not upfront about this, and also makes some attempt to cover each side, but does so selectively and with dismissive language, so it may give a false impression of fairness.
That’s fair. Although, on the point of covering both sides, doing so only to a degree at least seems typical of works of this genre. The Very Short Introduction series is the closest I have ever gotten to being assigned a textbook in a philosophy class, and usually they read about like this. Singer and de Lazari-Radek’s Utilitarianism: A Very Short Introduction seems very stylistically similar in certain ways, for instance. But I do think it makes sense that they should be more upfront about the scope at least.
It seems like you’re conflating the following two views:
1. Utilitarianism.net has an obligation to present views other than total symmetric utilitarianism in a sympathetic light.
2. Utilitarianism.net has an obligation not to present views other than total symmetric utilitarianism in an uncharitable and dismissive light.
I would claim #2, not #1, and presumably so would Michael. The quote about nihilism etc. is objectionable because it’s not just unsympathetic to such views, it’s condescending. Clearly many people who have reflected carefully about ethics think these alternatives are worth taking seriously, and it’s controversial to claim that “humane values” necessitate wanting to create happy beings de novo even at some (serious) opportunity cost to suffering. “Nihilistic” also connotes something stronger than denying positive value.
It seems to me that you’re conflating process and substance. Philosophical charity is a process virtue, and one that I believe our article exemplifies. (Again, the exploration of value blur offers a charitable development of the view in question.) You just don’t like that our substantive verdict on the view is very negative. And that’s fine, you don’t have to like it. But I want to be clear that this normative disagreement isn’t evidence of any philosophical defect on our part. (And I should flag that Michael’s process objections, e.g. complaining that we didn’t preface every normative claim with the tedious disclaimer “in our opinion”, reveals a lack of familiarity with standard norms for writing academic philosophy.)
“Clearly many people who have reflected carefully about ethics think these alternatives are worth taking seriously, and it’s controversial to claim...”
This sociological claim isn’t philosophically relevant. There’s nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously. There’s also nothing inherently objectionable about making claims that are controversial. (Every interesting philosophical claim is controversial.)
What you’re implicitly demanding is that we refrain from doing philosophy (which involves taking positions, including ones that others might dislike or find controversial), and instead merely report on others’ arguments and opinions in a NPOV fashion. That’s a fine norm for wikipedia, but I don’t think it’s a reasonable demand to make of all philosophers in all places, and IMO it would make utilitarianism.net worse (and something I, personally, would be much less interested in creating and contributing to) if we were to try to implement it there.
As a process matter, I’m all in favour of letting a thousand flowers bloom. If you don’t like our philosophical POV, feel free to make your own resource that presents things from a POV you find more congenial! And certainly if we’re making philosophical errors, or overlooking important counterarguments, I’m happy to have any of that drawn to my attention. But I don’t really find it valuable to just hear that some people don’t like our conclusions (that pretty much goes without saying). And I confess I find it very frustrating when people try to turn that substantive disagreement into a process complaint, as though it were somehow intrinsically illegitimate to disagree about which views are serious contenders to be true.
But I want to be clear that this normative disagreement isn’t evidence of any philosophical defect on our part.
Oh I absolutely agree with this. My objections to that quote have no bearing on how legitimate your view is, and I never claimed as much. What I find objectionable is that by using such dismissive language about the view you disagree with, not merely critical language, you’re causing harm to population ethics discourse. Ideally readers will form their views on this topic based on their merits and intuitions, not based on claims that views are “too divorced from humane values to be worth taking seriously.”
complaining that we didn’t preface every normative claim with the tedious disclaimer “in our opinion”
Personally I don’t think you need to do this.
This sociological claim isn’t philosophically relevant. There’s nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously. There’s also nothing inherently objectionable about making claims that are controversial.
Again, I didn’t claim that your dismissiveness bears on the merit of your view. The objectionable thing is that you’re confounding readers’ perceptions of the views with labels like “[not] worth taking seriously.” The fact that many people do take this view seriously suggests that that kind of label is uncharitable. (I suppose I’m not opposed in principle to being dismissive to views that are decently popular—I would have that response to the view that animals don’t matter morally, for example. But what bothers me about this case is partly that your argument for why it’s not worth taking seriously is pretty unsatisfactory.)
I’m certainly not calling for you to pass no judgments whatsoever on philosophical views, and “merely report on others’ arguments,” and I don’t think a reasonable reading of my comment would lead you to believe that.
And certainly if we’re making philosophical errors, or overlooking important counterarguments, I’m happy to have any of that drawn to my attention.
Indeed, I gave substantive feedback on the Population Ethics page a few months back, and hope you and your coauthors take it into account. :)
Fair with respect to it being a proposed counterexample. I’ve edited my reply above accordingly.
I have multiple complaints where I think the article is unfair or misleading, and they’re not just a matter of having disagreements with specific claims.
First, the article often fails to mark when something is opinion, giving the misleading impression of fact and objectivity. I quote examples below.
Second, I think we should hold ourselves to higher standards than using contemptuous language about views or intuitions that ethicists and thoughtful people find plausible or endorse, and I don’t think it’s fair to otherwise just call the views implausible or not worth taking seriously without marking this very explicitly as opinion (“arguably” isn’t enough, in my view; I’d instead recommend explicitly referring to the authors, e.g. using “We think (...)”).
I think the above two are especially misleading since the website describes itself as a “textbook introduction to utilitarianism” and is likely to be shared and used as such (e.g. in EA reading groups or with people new to EA). I think it’s normal to expect textbooks to strive for NPOV.
Third, I think being fair should require including the same kinds of arguments on each side, when available, and also noting when these arguments “prove too much” or otherwise undermine the views the article defends, if they do. Some of the kinds of arguments used to defend the total view against the Repugnant Conclusion can be used against intuitions supporting the total view or intuitions against person-affecting views (tolerating and debunking, as mentioned above, and attacking the alternatives, which the article does indeed do for alternatives to PA views).
Expanding on this third point, “How could that not be better than a barren rock?” has an obvious answer that was left out: person-affecting views (or equivalently, reasons implying person-affecting views) could be correct (or correct to a particular person, without stance-independence). This omission, and the contemptuous dismissal of the person-affecting intuition for this case that follows, seem intended to rule out tolerating the intuition and debunking the intuition, moves the article uses to defend the total view from the Repugnant Conclusion as an objection. The article also makes no attempt at either argument, when it’s not hard to come up with such arguments. This seems to me to be applying a double standard for argument inclusion.
One of the debunking arguments made undermines the veil of ignorance argument, which literally asks you to imagine yourself as part of the population, and is one of the three main arguments for utilitarianism on the introductory page:
I’d also guess it’s pretty easy to generate debunking arguments against specific intuitions, and I can propose a few specifically against adding lives ever being good in itself. Debunking arguments have also been used against moral realism generally, so they might “prove too much” (although I think stance-independent moral realism is actually false, anyway).
The article also criticizes the use of the word ‘repugnant’ in the name of the Repugnant Conclusion for being “rhetorically overblown” in the main text (as well as ‘sadistic’ in “Sadistic Conclusion” for being “misleading”/”a misnomer”, but only in a footnote), but then goes on to use similarly contemptuous and dismissive language against specific views (emphasis mine):
(This is also a matter of opinion, and not marked as such.)
(This is also a matter of opinion, and not marked as such.)
Again, I also think “divorced from humane values” is plainly false under some common definitions of ‘humane’. As I use that word, mostly as a synonym for ‘compassionate’, ensuring that happy people are born has nothing to do with being humane, while prioritizing suffering and the badly off, as practically implied by the procreation asymmetry, is more humane than not.
There are other normative claims made without any language to suggest that they’re opinions at all (again, emphasis mine):
I doubt there are decisive proofs for these claims.
Another (again, emphasis mine):
This gives me the impression that the author(s) didn’t properly entertain person-affecting views or really consider objections to the weaker versions that don’t apply to the stronger ones or alternatives (other than the original reasons given for person-affecting views). The weaker versions seem to me to be self-undermining, have to draw more arbitrary lines, and are supported only by direct intuitions about cases (at least in the article) over the stronger versions, not more general reasons:
On self-undermining, the reasons people give for holding person-affecting intuitions in the first place have to be defeated when lives are good enough, and the view would not really be person-affecting anymore, including according to the article’s definition (“Person-affecting views that deny we have (non-instrumental) reason to add happy lives to the world.”). Why wouldn’t “meh” lives be good enough, too?
On arbitrariness, how do you define a “flourishing life” and where do you draw the line (or precisely how the blur is graded)? Will this view end up having to define it in an individual-specific (or species-specific) way, or otherwise discount some individuals and species for having their maximums too low? Something else?
As far as I can tell, the only arguments given for the weaker versions are intuitions about cases. Intuitions about cases should be weighed against more general reasons like those given in actualist arguments and Frick’s conditional reasons.
The value blur proposal was interesting and seems to me worth writing up somewhere, but it’s unlikely to represent anyone’s (or any ethicist’s) actual views, and those sympathetic to person-affecting views might not endorse it even if they knew of it. The article also has a footnote that undermines the view itself (intentionally or not), but there are views that I think meet this challenge, so the value blur view risks being a strawman rather than a steelman, as might have been intended:
It would make more sense to me to focus on the asymmetric person-affecting views ethicists actually defend/endorse or that otherwise already appear in the literature. (Personally, I think in actualist and/or conditional reason terms, and I’m most sympathetic to negative utilitarianism (not technically PA), actualist asymmetric person-affecting views, and the views in Thomas, 2019, but Thomas, 2019 seems too new and obscure to me to be the focus of the article, too.)
I agree with some of these points. I am very often bothered by overuse of the charge of nihilism in general. In this case, if it comes down to “you don’t literally care about nothing, but there is something that seems to us worth caring about that you don’t,” then the charge seems especially misleading. A huge amount of what we think of as moral progress comes from no longer caring about things we used to; for instance, couldn’t an old-fashioned racist accuse modern sensibilities of being nihilistic philistines with respect to racial special obligations? I am somewhat satisfied by Chappell’s response here that what is uniquely being called out is views on which nothing is of positive value, which is a more distinctive use of the charge and less worrying.
I also agree that the piece would have been more hygienic if it discussed parallel problems with its own views and parallel defenses of others more, though in the interest of space it might have instead linked to some pieces making these points or flagged that such points had been made elsewhere instead.
However, all of this being said, your comment bothers me. The standard you are holding this piece to is one that I think just about every major work of analytic ethics of the last century would have failed. The point that this piece makes some debunking arguments while other debunking arguments can be made against the views it favors is, I think, true of literally every work of ethics that has ever made a debunking argument. It is also true of lots of very standard arguments, like any argument that points to counterintuitive implications of a view being criticized.
Likewise the idea that offhand uses of the words “problematic” or “perverse” to describe different arguments/implications is too charged not to be marked explicitly as a matter of opinion…I mean, at least some pieces of ethical writing don’t use debunking arguments at all, this point in particular though seems to go way too far. Not just because it is asking for ethics to entirely change its style in order to tip-toe around the author’s real emotions, but also because these emotions seem essential to the project itself to me.
Ethics papers do a variety of things, in particular they highlight distinctions, implications, and other things that might allow the reader to see a theory more clearly, but unless you are an extremely strict realist (and even realists like Parfit regularly break this rule) they are also to an extent an exercise in rhetoric. In particular they try to give the reader a sense of what it feels like from the inside to believe what they believe, and I think this is important and analytic philosophy will have gone too far when it decides that this part of the project simply doesn’t matter.
I’m sorry if I’m sounding somewhat charged here, again, I agree with many of your points and think you mean well here, but I’ve become especially allergic to this type of motte and bailey recently, and I’m worried that the way this comment is written verges on it.
Fair with respect to nihilism in particular. I can see the cases both for and against that charge against the procreation asymmetry. EDIT: although the word has fairly negative connotations, so I still think it’s better not to use it in this context.
With respect to fairness, given the way the website is used and marketed, i.e., as an introductory textbook to be shared widely with audiences not yet very familiar with the area, I think it’ll mislead readers who are new to the area or who otherwise don’t take the time to read it carefully and critically. It’s even referenced in the EA Forum tag/wiki for Utilitarianism, along with a podcast*, in the section External links (although there are other references in Further reading), and described there as a textbook, too. I’m guessing EA groups will sometimes share it with their members. It might be used in actual courses, as seems intended. If I were to include it in EA materials or university courses, I’d also include exercises asking readers to spot where parallel arguments could have been used but weren’t and to try to come up with them, as well as exercises about other issues, and have them read opposing pieces. We shouldn’t normally have to do this for something described as or intended to be treated as a textbook.
Within an actual university philosophy class, maybe this is all fine, since other materials and critical reading will normally be expected (or, I’d hope so). But that still leaves promotion within EA, where this might not happen. The page tries to steer the audience towards the total view and longtermism, so it could shape our community while misleading uncritical readers through unfairly treating other views. To be clear, though, I don’t know how and how much it is being or will be promoted within the community. Maybe these concerns are overblown.
On the other hand, academics are trained to see through these issues, and papers are read primarily by smaller and more critical audiences, so the risks of misleading are lower. So it seems reasonable to me to hold it to a higher standard than an academic paper.
* Bold part edited in after. I missed the podcast when I first looked. EDIT: I’ve also just added https://www.utilitarianism.com and some other standard references to that page.
I’m of two minds on this. On the one hand you’re right that a textbook style should be more referential and less polemical as a rule. On the other hand, as you also point out, pretty much every philosophy class I’ve ever taken is made entirely of primary source readings. In the rare cases where something more referential is assigned instead, generally it’s just something like a Stanford Encyclopedia of Philosophy entry. I’m not certain how all introductory EA fellowships are run, but the one I facilitated was also mostly primary, semi-polemical sources, defending a particular perspective, followed by discussion, much like a philosophy class. Maybe utilitarianism.net is aiming more for being a textbook on utilitarianism, but it seems to me like it is more of a set of standard arguments for the classical utilitarian perspective, with a pretty clear bias in favor of it. That also seems more consistent with what Chappell has been saying, though of course it’s possible that its framing doesn’t reflect this sufficiently as well. Like you though, I’m not super familiar with how this resource is generally used, I just don’t know that I would think of it first and foremost as a sort of neutral secondary reference. That just doesn’t seem like its purpose.
Also, another difference with academic papers is that they’re often upfront about their intentions to defend a particular position, so readers don’t get the impression that a paper gives a balanced or fair treatment of the relevant issues. Utilitarianism.net is not upfront about this, and also makes some attempt to cover each side, but does so selectively and with dismissive language, so it may give a false impression of fairness.
That’s fair, although on the point of covering both sides, it does so to a degree that at least seems typical of works of this genre. The Very Short Introduction series is the closest I have ever gotten to being assigned a textbook in a philosophy class, and they usually read about like this. Singer and de Lazari-Radek’s Utilitarianism: A Very Short Introduction seems very stylistically similar in certain ways, for instance. But I do think it makes sense that they should be more upfront about the scope, at least.
It seems like you’re conflating the following two views:
1. Utilitarianism.net has an obligation to present views other than total symmetric utilitarianism in a sympathetic light.
2. Utilitarianism.net has an obligation not to present views other than total symmetric utilitarianism in an uncharitable and dismissive light.
I would claim #2, not #1, and presumably so would Michael. The quote about nihilism etc. is objectionable because it’s not just unsympathetic to such views, it’s condescending. Clearly many people who have reflected carefully about ethics think these alternatives are worth taking seriously, and it’s controversial to claim that “humane values” necessitate wanting to create happy beings de novo even at some (serious) opportunity cost to suffering. “Nihilistic” also connotes something stronger than denying positive value.
It seems to me that you’re conflating process and substance. Philosophical charity is a process virtue, and one that I believe our article exemplifies. (Again, the exploration of value blur offers a charitable development of the view in question.) You just don’t like that our substantive verdict on the view is very negative. And that’s fine, you don’t have to like it. But I want to be clear that this normative disagreement isn’t evidence of any philosophical defect on our part. (And I should flag that Michael’s process objections, e.g. complaining that we didn’t preface every normative claim with the tedious disclaimer “in our opinion”, reveal a lack of familiarity with standard norms for writing academic philosophy.)
This sociological claim isn’t philosophically relevant. There’s nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously. There’s also nothing inherently objectionable about making claims that are controversial. (Every interesting philosophical claim is controversial.)
What you’re implicitly demanding is that we refrain from doing philosophy (which involves taking positions, including ones that others might dislike or find controversial), and instead merely report on others’ arguments and opinions in a NPOV fashion. That’s a fine norm for wikipedia, but I don’t think it’s a reasonable demand to make of all philosophers in all places, and IMO it would make utilitarianism.net worse (and something I, personally, would be much less interested in creating and contributing to) if we were to try to implement it there.
As a process matter, I’m all in favour of letting a thousand flowers bloom. If you don’t like our philosophical POV, feel free to make your own resource that presents things from a POV you find more congenial! And certainly if we’re making philosophical errors, or overlooking important counterarguments, I’m happy to have any of that drawn to my attention. But I don’t really find it valuable to just hear that some people don’t like our conclusions (that pretty much goes without saying). And I confess I find it very frustrating when people try to turn that substantive disagreement into a process complaint, as though it were somehow intrinsically illegitimate to disagree about which views are serious contenders to be true.
Oh I absolutely agree with this. My objections to that quote have no bearing on how legitimate your view is, and I never claimed as much. What I find objectionable is that by using such dismissive language about the view you disagree with, not merely critical language, you’re causing harm to population ethics discourse. Ideally readers will form their views on this topic based on their merits and intuitions, not based on claims that views are “too divorced from humane values to be worth taking seriously.”
Personally I don’t think you need to do this.
Again, I didn’t claim that your dismissiveness bears on the merit of your view. The objectionable thing is that you’re confounding readers’ perceptions of the views with labels like “[not] worth taking seriously.” The fact that many people do take this view seriously suggests that that kind of label is uncharitable. (I suppose I’m not opposed in principle to being dismissive to views that are decently popular—I would have that response to the view that animals don’t matter morally, for example. But what bothers me about this case is partly that your argument for why it’s not worth taking seriously is pretty unsatisfactory.)
I’m certainly not calling for you to pass no judgments whatsoever on philosophical views, and “merely report on others’ arguments,” and I don’t think a reasonable reading of my comment would lead you to believe that.
Indeed, I gave substantive feedback on the Population Ethics page a few months back, and hope you and your coauthors take it into account. :)