In practice, no. For example, I am willing to bite the bullet on saying that torture is not always wrong—consider the terrorist who has planted a nuclear bomb in a big city, a bomb that will detonate in a few hours unless we torture his small child in front of him. How much weight should I give to the possibility that, for example, torture is always wrong, even if it is the only way to prevent a much greater amount of suffering? I have no idea. I’m not clear how—in the absence of a divine being who has commanded us not to do it—it could be wrong in such circumstances. And I don’t give any serious credence to the existence of such a being.
I give more credence to the idea that some insects, and a wider range of crustaceans than just lobsters and crabs, are sentient and therefore belong within my moral circle. But see my reply to “justsaying” above—I still have no idea what their suffering would be like, and therefore how much weight to give it. (Of course, the numbers count too.)
The things that most people can see are good, and that would therefore bring more people into the movement, like finding the best ways to help people in extreme poverty and ending factory farming (see my answer above about what I would do if I were in my twenties).
One common objection to what The Life You Can Save and GiveWell are doing—recommending the most effective charities to help people in extreme poverty—is that this is a band-aid, and doesn’t get at the underlying problems, for which structural change is needed. I’d like to see more EAs engaging with that objection, and assessing paths to structural changes that are feasible and likely to make a difference.
Thanks, Tyler. Here is the piece: https://www.newyorker.com/culture/the-new-yorker-interview/peter-singer-is-committed-to-controversial-ideas
It’s really hard to know what relative weights to give chickens, and harder still with shrimp or insects. The Rethink Priorities weights could be wrong by orders of magnitude, but they might also be roughly correct.
Re the Meat Eater Problem (see Michael Plant’s article in the Journal of Controversial Ideas), I don’t think we will get to a better, kinder world by letting people die from preventable, poverty-related conditions. A world without poverty is more likely to come around to caring about animals than one in which some are wealthy and others are in extreme poverty.
I don’t claim that this is an adequate answer to the dilemma you sketch for someone with my views. It’s a good topic for further thought.
Good question, but I don’t have a good answer. My answer is more pragmatic than principled (see, for example, my previous response to Devon Fritz’s question about what EA is getting most wrong).
Placing too much emphasis on longtermism. I’m not against longtermism at all—it’s true that we neglect future sentient beings, as we neglect people who are distant from us, and as we neglect nonhuman animals. But it’s not good for people to get the impression that EA is mostly about longtermism. That impression hinders the prospects of EA becoming a broad and popular movement that attracts a wide range of people, and we have an important message to get across to those people: some ways of doing good are hundreds of times more effective than others.
My impression, by the way, is that this lesson has been learned, and longtermism is less prominent in discussions of EA today than it was a couple of years ago. But I could be wrong about that.
If you want a more concrete example of what Parfit took to be an irreducibly normative truth, it might be this: the fact that, if I do X, someone will be in agony is a reason against doing X (not necessarily a conclusive reason, of course).
When Parfit said that if there are no such truths, nothing would matter, he meant that nothing would matter in an objective sense. It might matter to me, of course. But it wouldn’t really matter. I agree with that, although I can also see that the fact that something matters to me, or to those I love and care about, does give me a reason not to do it. For more discussion, see the collection of essays I edited, Does Anything Really Matter? (Oxford, 2017). The intention, when I conceived this volume, was for Parfit to reply to his critics in the same volume, but his reply grew so long that it had to be published separately; it forms the bulk of On What Matters, Volume Three.
Getting too far ahead of where most people are—for example, by talking about insect suffering. It’s hard enough, at present, to get people to care about chickens or fish. We need to focus on areas in which many people are already on our side, and others can be persuaded to come over. Otherwise, we aren’t likely to make progress, and without occasional concrete gains for animals, we won’t be able to grow the movement.
I’m not sure that I’d be a philosopher today. When I was in my twenties, practical ethics was virtually a new field, and there was a lot to be done. (Of course, it wasn’t really new, because philosophers had discussed practical issues from Plato onwards, but it had been neglected in the 20th century, so it seemed new.) Now there are many very good people working in practical ethics, and it is harder to have an impact. Perhaps I would become a full-time campaigner, either for effective altruism in general or, more specifically, against factory farming, which I see as a moral atrocity: it produces suffering on a scale too vast for us to comprehend, it is terrible for the climate and for the local and regional environment, and it wastes the food we grow to feed the animals.
My introduction to philosophy was Bertrand Russell’s History of Western Philosophy, which I read while in high school (there were no philosophy classes in Australian high schools then), so that clearly had a significant influence on me, though more by informing me about what philosophy is and interesting me in some of the ideas it discusses than by shaping specific beliefs. Sidgwick’s The Methods of Ethics had a much greater influence on me, above all in showing me that many commonsense moral rules can be explained as offering a morality that is easier for people to follow in everyday life than utilitarianism, but that will generally lead to the kind of outcomes utilitarians favor. R.M. Hare’s work was also influential, in the same direction—here I have in mind Freedom and Reason and his later book, Moral Thinking. Jonathan Glover’s Causing Death and Saving Lives illustrated the importance of clear thinking for handling life and death questions in bioethics, and led me in that direction. Finally, Derek Parfit has had a major influence on me, initially through his teaching at Oxford, where I first came across the issues in population ethics that he later discussed in Reasons and Persons, and then through On What Matters, which, as discussed in the interview mentioned in another comment above, persuaded me that Hume was wrong about reason being the slave of the passions, and led me to hold that there are objective truths in ethics.
Daniel, thanks for referring people to that Future of Life podcast interview in which I explain why I became a moral realist. Given that I’ve dealt with the issue quite fully there, I’ll move on to other questions.
Why is the choice not directly comparable? If it were possible to offer a blind person a choice between being able to see, or having a guide dog, would it be so difficult for the blind person to choose?
Still, if you can suggest better comparisons that make the same point, I’ll be happy to use them.
These are good points, and I’m suitably chastened for not being sufficiently thorough in checking Toby Ord’s claims.
I’m pleased to see that GiveWell is again investigating treating blindness: http://blog.givewell.org/2017/05/11/update-on-our-views-on-cataract-surgery/. In this very recent post, they say: “We believe there is evidence that cataract surgeries substantially improve vision. Very roughly, we estimate that the cost-effectiveness of cataract surgery is ~$1,000 per severe visual impairment reversed.[1]”
The footnote reads: “This estimate is on the higher end of the range we calculated, because it assumes additional costs due to demand generation activities, or identifying patients who would not otherwise have known about surgery. We use this figure because we expect that GiveWell is more likely to recommend an organization that can demonstrate, through its demand generation activities, that it is causing additional surgeries to happen. The $1,000 figure also reflects our sense that cost-effectiveness in general tends to worsen (become more expensive) as we spend more time building our model of any intervention. Finally, it is a round figure that communicates our uncertainty about this estimate overall.” But it’s reasonable to say that until they complete this investigation, which will be years rather than months, it may be better to avoid using the example of preventing or curing blindness. So the options seem to be either not using the example of blindness at all, or using this rough figure of $1,000, with suitable disclaimers. It still leads to 40 cases of severe visual impairment reversed vs. 1 case of providing a blind person with a guide dog.
I don’t understand the objection that it is “ableist” to say funding should go towards preventing people from becoming blind rather than towards training guide dogs.
If “ableism” is really supposed to be like racism or sexism, then we should not regard it as better to be able to see than to have the disability of not being able to see. But if people who cannot see are no worse off than people who can see, why should we even provide guide dogs for them? On the other hand, if—more sensibly—disability activists think that people who are unable to see are at a disadvantage and need our help, wouldn’t they agree that it is better to prevent many people (say, 400) from experiencing this disadvantage than to help one person cope a little better with it? Especially if the 400 are living in a developing country and have far less social support than the one person, who lives in a developed country?
Can someone explain to me what is wrong with this argument? If not, I plan to keep using the example.
Regrettably, I misspoke in my TED talk when I referred to “curing” blindness from trachoma. I should have said “preventing.” (I used to talk about curing blindness by performing cataract surgery, and that may be the cause of the slip.) But there is a source for the figure I cited, and it is not GiveWell. I give the details in The Most Good You Can Do, in an endnote on p. 194, but to save you all looking it up, here it is:
“I owe this comparison to Toby Ord, “The moral imperative towards cost-effectiveness,” http://www.givingwhatwecan.org/sites/givingwhatwecan.org/files/attachments/moral_imperative.pdf. Ord suggests a figure of $20 for preventing blindness; I have been more conservative. Ord explains his estimate of the cost of providing a guide dog as follows: “Guide Dogs of America estimate $19,000 for the training of the dog. When the cost of training the recipient to use the dog is included, the cost doubles to $38,000. Other guide dog providers give similar estimates, for example Seeing Eye estimates a total of $50,000 per person/dog partnership, while Guiding Eyes for the Blind estimates a total of $40,000.” His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate. I thank Brian Doolan of the Fred Hollows Foundation for discussion of his organization’s claim that it can restore sight for $25. GiveWell suggests a figure of $100 for surgeries that prevent one to thirty years of blindness and another one to thirty years of low vision but cautions that the sources of these figures are not clear enough to justify a high level of confidence.”
Now, maybe there is some more recent research casting doubt on this figure, but note that the numbers I use allow that the figure may be $100 (typically, when I speak on this, I give a range, saying that for the cost of training one guide dog, we may be able to prevent somewhere between 400 and 1,600 cases of blindness). Probably it isn’t necessary even to do that. The point would be just as strong if it were 400, or even 40.
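For anyone who wants to check the arithmetic behind those ratios, here is a minimal sketch, assuming the rough figures quoted above (about $40,000 for one person/dog partnership, and $25, $100, or $1,000 per case of blindness prevented or reversed); none of these numbers is precise, as the discussion makes clear:

```python
# Rough cost-effectiveness ratios, using the estimates quoted in this thread.
# All figures are the uncertain estimates discussed above, not authoritative data.

guide_dog_cost = 40_000  # approx. USD for one person/dog partnership (Ord's note)

# Cost per case of blindness prevented or reversed, under different estimates:
per_case_estimates = {
    "Fred Hollows Foundation claim": 25,
    "older GiveWell figure": 100,
    "GiveWell 2017 cataract estimate": 1_000,
}

for source, cost in per_case_estimates.items():
    cases = guide_dog_cost // cost
    print(f"{source}: ${cost} per case -> ~{cases} cases per guide dog")

# Prints ratios of ~1600, ~400, and ~40 cases respectively, matching the
# 400-1,600 range (and the still more conservative figure of 40) used above.
```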
As it happens, more or less simultaneously with this AMA, there is a Pea Soup discussion going on in response to an article about my views by Johann Frick. My response to Johann is relevant to this question, even though it doesn’t use the satisficing terminology. But do take a look:
https://peasoupblog.com/2024/07/johann-frick-singer-without-utilitarianism-on-ecumenicalism-and-esotericism-in-practical-ethics/#comment-28935
I’m going to stop answering your questions now, as I’ve got other things I need to do as well as the Pea Soup discussion, including preparing for the next interview for the Lives Well Lived podcast that I am doing with Kasia de Lazari-Radek. If you are not familiar with it, check it out on Apple Podcasts, Spotify, and other platforms. We have interviews up with Jane Goodall, Yuval Harari, Ingrid Newkirk, Daniel Kahneman (sadly, recorded shortly before his death) and others.
But here is some good news—you can try asking your questions to Peter Singer AI! Seriously—become a paid subscriber to my Substack, and it’s available now (and, EAs, all funds raised will be donated to The Life You Can Save’s recommended charities). Eventually we will open it up to everyone, but we want to test it first and would value your comments.
https://boldreasoningwithpetersinger.substack.com/
Thanks for all the questions, and sorry that I can’t answer them all.
Peter